WO2023071884A1 - Gaze detection method, control method for electronic device, and associated devices


Info

Publication number
WO2023071884A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
face
gaze
coordinates
gaze point
Application number
PCT/CN2022/126148
Other languages
English (en)
Chinese (zh)
Inventor
龚章泉
Original Assignee
Oppo广东移动通信有限公司
Application filed by Oppo广东移动通信有限公司
Publication of WO2023071884A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Definitions

  • the present application relates to the technical field of consumer electronics, and in particular to a gaze detection method, a control method for electronic equipment, a detection device, a control device, electronic equipment, and a non-volatile computer-readable storage medium.
  • electronic devices can estimate a user's gaze point by collecting face images.
  • the present application provides a gaze detection method, a control method of an electronic device, a detection device, a control device, an electronic device and a non-volatile computer-readable storage medium.
  • the gaze detection method of an embodiment of the present application includes determining the pose information of the face according to the face information, and determining the reference gaze point coordinates according to the face information; in response to the pose information being greater than a preset threshold, determining correction parameters according to the pose information; and determining gaze information according to the reference gaze point coordinates and the correction parameters.
  • a detection device includes a first determination module, a second determination module and a third determination module.
  • the first determination module is used to determine the pose information of the face according to the face information, and determine the reference gaze point coordinates according to the face information;
  • the second determination module is used to determine correction parameters according to the pose information in response to the pose information being greater than a preset threshold;
  • the third determination module is configured to determine gaze information according to the coordinates of the reference gaze point and the correction parameters.
  • An electronic device includes a processor, and the processor is configured to determine the pose information of the face according to the face information, and determine the reference gaze point coordinates according to the face information; in response to the pose information being greater than a preset threshold, determine correction parameters according to the pose information; and determine gaze information according to the reference gaze point coordinates and the correction parameters.
  • in the gaze detection method, detection device and electronic device of the present application, after the face information is obtained, the face pose is first calculated from the face information. If the pose information is greater than the preset threshold, which would degrade the calculation accuracy of the gaze point coordinates, the reference gaze point coordinates are calculated according to the face information, and the correction parameters are then calculated according to the pose information, so that the reference gaze point coordinates can be corrected according to the correction parameters; this prevents an excessively large face shooting angle in the acquired face information from impairing gaze detection, and thus improves the accuracy of gaze detection.
  • the method for controlling an electronic device includes determining the pose information of the face according to the face information, and determining the reference gaze point coordinates according to the face information; in response to the pose information being greater than a preset threshold, determining correction parameters according to the pose information; determining gaze information according to the reference gaze point coordinates and the correction parameters; and controlling the electronic device according to the gaze information.
  • the control device in the embodiment of the present application includes an acquisition module, a first determination module and a second determination module.
  • the acquisition module is used to determine the pose information of the face according to the face information, and determine the reference gaze point coordinates according to the face information;
  • the first determination module is used to determine correction parameters according to the pose information in response to the pose information being greater than a preset threshold;
  • the second determination module is used to determine gaze information according to the reference gaze point coordinates and the correction parameters;
  • the electronic device includes a processor, and the processor is used to determine the pose information of the face according to the face information, and determine the reference gaze point coordinates according to the face information; in response to the pose information being greater than a preset threshold, determine correction parameters according to the pose information; determine gaze information according to the reference gaze point coordinates and the correction parameters; and control the electronic device according to the gaze information.
  • when the computer program is executed by one or more processors, the processors are caused to execute a gaze detection method or a control method.
  • the gaze detection method includes determining the pose information of the face according to the face information, and determining the reference gaze point coordinates according to the face information; in response to the pose information being greater than a preset threshold, determining correction parameters according to the pose information; and determining gaze information according to the reference gaze point coordinates and the correction parameters.
  • the control method of the electronic device includes determining the pose information of the face according to the face information, and determining the reference gaze point coordinates according to the face information; in response to the pose information being greater than a preset threshold, determining correction parameters according to the pose information; determining gaze information according to the reference gaze point coordinates and the correction parameters; and controlling the electronic device according to the gaze information.
  • FIG. 1 is a schematic flowchart of a gaze detection method in some embodiments of the present application.
  • FIG. 2 is a block diagram of a detection device in some embodiments of the present application.
  • FIG. 3 is a schematic plan view of an electronic device in some embodiments of the present application.
  • FIG. 4 is a schematic diagram of the connection between an electronic device and a cloud server in some embodiments of the present application.
  • FIG. 5 to FIG. 7 are schematic flowcharts of gaze detection methods in some embodiments of the present application.
  • FIG. 8 is a schematic structural diagram of a detection model in some embodiments of the present application.
  • FIG. 9 is a schematic flowchart of a method for controlling an electronic device in some embodiments of the present application.
  • FIG. 10 is a block diagram of a control device in some embodiments of the present application.
  • FIG. 11 to FIG. 14 are schematic diagrams of scenarios of control methods in some embodiments of the present application.
  • FIG. 15 and FIG. 16 are schematic flowcharts of control methods in some embodiments of the present application.
  • FIG. 17 and FIG. 18 are schematic diagrams of scenarios of control methods in some embodiments of the present application.
  • FIG. 19 is a schematic flowchart of a control method in some embodiments of the present application.
  • FIG. 20 is a schematic diagram of the connection between a processor and a computer-readable storage medium in some embodiments of the present application.
  • the gaze detection method of the present application includes determining the pose information of the face according to the face information, and determining the reference gaze point coordinates according to the face information; in response to the pose information being greater than a preset threshold, determining correction parameters according to the pose information; and determining gaze information according to the reference gaze point coordinates and the correction parameters.
  • the gaze detection method further includes: in response to the pose information being less than a preset threshold, calculating the reference gaze point coordinates according to the face information as the gaze information.
  • the posture information includes a posture angle
  • the posture angle includes a pitch angle and a yaw angle.
  • judging whether the pose information of the face is greater than a preset threshold includes: judging, according to the face information, whether the pitch angle or the yaw angle is greater than the preset threshold.
  • the gaze detection method further includes: obtaining a training sample set, where the training sample set includes first-type samples whose face pose information is less than a preset threshold and second-type samples whose face pose information is greater than the preset threshold; and training a preset detection model according to the first-type samples and the second-type samples. Determining the correction parameters according to the pose information includes: determining the correction parameters according to the pose information based on the detection model.
  • the detection model includes a gaze point detection module and a correction module, and training the detection model according to the first-type samples and the second-type samples includes: inputting the first-type samples into the gaze point detection module to output first training coordinates; inputting the second-type samples into the gaze point detection module and the correction module to output second training coordinates; based on a preset loss function, calculating a first loss value according to first preset coordinates corresponding to the first-type samples and the first training coordinates, and calculating a second loss value according to second preset coordinates corresponding to the second-type samples and the second training coordinates; and adjusting the detection model according to the first loss value and the second loss value until the detection model converges.
  • in the training process of N consecutive batches, when the first difference between the first loss values corresponding to any two batches and the second difference between the second loss values corresponding to any two batches are both less than a predetermined difference threshold, the detection model is determined to have converged, where N is a positive integer greater than 1; or, when the first loss value and the second loss value are both less than a predetermined loss threshold, the detection model is determined to have converged.
  • the face information includes a face mask, a left-eye image and a right-eye image
  • the face mask is used to indicate the position of the face in the image
  • calculating the reference gaze point coordinates according to the face information includes: calculating the position information of the face relative to the electronic device according to the face mask; and calculating the reference gaze point coordinates according to the position information, the left-eye image and the right-eye image.
  • the face information includes face feature points
  • the pose information includes pose angles and three-dimensional coordinate offsets
  • the correction parameters include rotation matrices and translation matrices
  • determining the pose information of the face according to the face information includes: calculating the pose angle and the three-dimensional coordinate offset according to the face feature points; calculating the correction parameters according to the pose information includes: calculating the rotation matrix according to the pose angle, and calculating the translation matrix according to the three-dimensional coordinate offset.
  • the electronic device control method of the present application includes determining the pose information of the face according to the face information, and determining the reference gaze point coordinates according to the face information; in response to the pose information being greater than a preset threshold, determining correction parameters according to the pose information; determining gaze information according to the reference gaze point coordinates and the correction parameters; and controlling the electronic device according to the gaze information.
  • the control method further includes: in response to the pose information being less than a preset threshold, calculating the reference gaze point coordinates according to the face information as the gaze information.
  • the face information includes a face mask, a left-eye image and a right-eye image
  • the face mask is used to indicate the position of the face in the image
  • calculating the reference gaze point coordinates according to the face information includes: calculating the position information of the face relative to the electronic device according to the face mask; and calculating the reference gaze point coordinates according to the position information, the left-eye image and the right-eye image.
  • the face information includes face feature points
  • the pose information includes pose angles and three-dimensional coordinate offsets
  • the correction parameters include rotation matrices and translation matrices
  • calculating the correction parameters according to the pose information includes: calculating the pose angle and the three-dimensional coordinate offset according to the face feature points; calculating the rotation matrix according to the pose angle, and calculating the translation matrix according to the three-dimensional coordinate offset.
  • the gazing information includes gaze point coordinates.
  • the control method further includes: acquiring a captured image within a first predetermined duration before the screen is turned off; and, in response to the captured image containing face information, determining the gaze information; controlling the electronic device according to the gaze information includes: in response to the gaze point coordinates being located in the display area of the display screen, keeping the screen on for a second predetermined duration.
  • the display area is associated with a preset coordinate range
  • the control method further includes: when the gaze point coordinates are within the preset coordinate range, determining that the gaze point coordinates are located in the display area.
  • before determining the pose information of the face according to the face information and determining the reference gaze point coordinates according to the face information, the control method further includes: in response to the electronic device not receiving an input operation, acquiring a captured image; controlling the electronic device according to the gaze information includes: in response to the captured image containing a face and the gaze point coordinates being located in the display area, adjusting the display brightness of the display screen to a first predetermined brightness; and, in response to the captured image not containing a face, or the captured image containing a face but the gaze point coordinates being outside the display area, adjusting the display brightness to a second predetermined brightness, where the second predetermined brightness is lower than the first predetermined brightness.
  • the detection device of the present application includes a first determination module, a second determination module and a third determination module.
  • the first determination module is used to determine the pose information of the face according to the face information, and determine the coordinates of the reference gaze point according to the face information;
  • the second determination module is used to determine the correction parameters according to the pose information in response to the pose information being greater than a preset threshold;
  • the third determination module is configured to determine the gaze information according to the reference gaze point coordinates and the correction parameters.
  • the control device of the present application includes an acquisition module, a first determination module, a second determination module and a control module.
  • the acquisition module is used to determine the pose information of the face according to the face information, and determine the reference gaze point coordinates according to the face information;
  • the first determination module is used to determine the correction parameters according to the pose information in response to the pose information being greater than a preset threshold;
  • the second determination module is used to determine the gaze information according to the reference gaze point coordinates and the correction parameters;
  • the control module is used to control the electronic equipment according to the gaze information.
  • the electronic device of the present application includes a processor, and the processor is used to determine the pose information of the face according to the face information, and determine the reference gaze point coordinates according to the face information; in response to the pose information being greater than a preset threshold, determine the correction parameters according to the pose information; and determine the gaze information according to the reference gaze point coordinates and the correction parameters.
  • the electronic device of the present application includes a processor, and the processor is used to determine the pose information of the face according to the face information, and determine the reference gaze point coordinates according to the face information; in response to the pose information being greater than a preset threshold, determine the correction parameters according to the pose information; determine the gaze information according to the reference gaze point coordinates and the correction parameters; and control the electronic device according to the gaze information.
  • the non-volatile computer-readable storage medium of the present application includes a computer program.
  • when the computer program is executed by one or more processors, the processor executes the gaze detection method of any of the above embodiments, or the control method of the electronic device of any of the above embodiments.
  • the gaze detection method of the embodiment of the present application includes the following steps:
  • 011 Determine the pose information of the face according to the face information, and determine the reference gaze point coordinates according to the face information;
  • 013 In response to the pose information being greater than a preset threshold, determine correction parameters according to the pose information;
  • 015 Determine the gaze information according to the reference gaze point coordinates and the correction parameters.
  • the detection device 10 in the embodiment of the present application includes a first determination module 11 , a second determination module 12 and a third determination module 13 .
  • the first determination module 11 is used to determine the pose information of the face according to the face information, and determines the coordinates of the reference gaze point according to the face information;
  • the second determination module 12 is used to determine the correction parameters according to the pose information in response to the pose information being greater than a preset threshold ;
  • the third determination module 13 is used to determine the gaze information according to the coordinates of the reference gaze point and the correction parameters. That is to say, step 011 can be implemented by the first determination module 11 , step 013 can be performed by the second determination module 12 and step 015 can be performed by the third determination module 13 .
  • the electronic device 100 in the embodiment of the present application includes a processor 60 and a collection device 30 .
  • the acquisition device 30 is used to collect face information at a predetermined frame rate (the face information may include a face image, such as a visible light image, an infrared image, or a depth image of the face);
  • the acquisition device 30 may be one or more of a visible light camera, an infrared camera, and a depth camera, where the visible light camera can collect visible light face images, the infrared camera can collect infrared face images, and the depth camera can collect depth face images;
  • for example, when the acquisition device 30 includes a visible light camera, an infrared camera, and a depth camera, the acquisition device 30 can simultaneously acquire a visible light face image, an infrared face image, and a depth face image.
  • the processor 60 may include an image signal processor (ISP), a neural-network processing unit (NPU), and an application processor (AP), and the detection device 10 is arranged in the electronic device 100, where the first determination module 11 can be arranged on the ISP and the NPU, and the processor 60 is connected to the acquisition device 30. After the acquisition device 30 collects the face image, the ISP can process the face image to obtain the face information, the NPU can determine the reference gaze point coordinates according to the face information, and the second determination module 12 and the third determination module 13 can be arranged on the NPU.
  • the processor 60 (specifically, the ISP and the NPU) is used to determine the pose information of the face according to the face information; the processor 60 (specifically, the NPU) is also used to determine the correction parameters according to the pose information in response to the pose information being greater than a preset threshold, and to determine the gaze information according to the reference gaze point coordinates and the correction parameters. That is to say, step 011, step 013 and step 015 may be executed by the processor 60.
  • the electronic device 100 may be a mobile phone, a smart watch, a tablet computer, a display device, a notebook computer, a teller machine, a gate, a head-mounted display device, a game machine, and the like. As shown in FIG. 3 , the embodiment of the present application is described by taking the electronic device 100 as a mobile phone as an example. It can be understood that the specific form of the electronic device 100 is not limited to the mobile phone.
  • the acquisition device 30 can collect the user's face information once per predetermined time interval, continuously performing gaze detection on the user while keeping the power consumption of the electronic device 100 low; or, when the user is using an application that requires gaze detection (such as browser software, forum software, or video software), it can collect face information at a predetermined frame rate (such as 10 frames per second), so that face information is collected only when gaze detection is required, minimizing the power consumption of gaze detection.
  • the processor 60 can recognize the face image; for example, the processor 60 can compare the face image with a preset face template to determine the face in the face image and the image areas where different parts of the face (such as the eyes and nose) are located. The processor 60 can perform face recognition in a Trusted Execution Environment (TEE) to protect the user's privacy; alternatively, the preset face template can be stored in the cloud server 200, and the electronic device 100 sends the face image to the cloud server 200 for comparison to determine the face area image; handing face recognition over to the cloud server 200 reduces the processing load of the electronic device 100 and improves image processing efficiency. The processor 60 can then recognize the face area image to determine the pose information of the face. More specifically, the face and the different parts of the face can be recognized according to their shape features, so as to obtain the face information.
  • the pose information of the face can be calculated according to the face information.
  • the pose information can be calculated by extracting features of the face image and computing the pose from the position coordinates of the extracted feature points. For example, the nose tip, the centers of the left and right eyes, and the left and right mouth corners can be used as feature points; as the pose of the face changes, the position coordinates of these feature points change accordingly. A three-dimensional coordinate system can be established with the nose tip as the origin, with the pose angles (such as the pitch angle, yaw angle, and roll angle) representing the rotation angles of the face about the three coordinate axes of this coordinate system. Taking the horizontal rotation angle of the face relative to the display screen 40 of the electronic device 100 as an example, the larger the deflection angle of the face (i.e., the horizontal rotation angle), the closer together the two feature points corresponding to the left and right eyes appear. Therefore, the pose information of the face can be calculated accurately from the position coordinates of the feature points.
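  • The following is a minimal sketch of this kind of feature-point-based pose estimation, assuming OpenCV and a generic 3D face model; the model coordinates, camera intrinsics, and Euler-angle convention are illustrative assumptions, not values from the patent.

```python
# A minimal sketch of feature-point-based pose estimation, assuming OpenCV and
# a generic 3D face model. The model coordinates, camera intrinsics and the
# Euler-angle convention are illustrative assumptions, not the patent's values.
import cv2
import numpy as np

# Approximate 3D positions (in mm) of the five feature points named above,
# in a model frame with the nose tip as the origin.
MODEL_POINTS = np.array([
    [0.0, 0.0, 0.0],        # nose tip
    [-30.0, 35.0, -30.0],   # left eye center
    [30.0, 35.0, -30.0],    # right eye center
    [-25.0, -30.0, -30.0],  # left mouth corner
    [25.0, -30.0, -30.0],   # right mouth corner
], dtype=np.float64)

def estimate_pose(image_points, frame_w, frame_h):
    """Return (pitch, yaw, roll) in degrees from the 5 detected 2D feature points."""
    focal = frame_w  # crude pinhole approximation of the focal length
    camera_matrix = np.array([[focal, 0, frame_w / 2],
                              [0, focal, frame_h / 2],
                              [0, 0, 1]], dtype=np.float64)
    ok, rvec, _tvec = cv2.solvePnP(MODEL_POINTS,
                                   np.asarray(image_points, dtype=np.float64),
                                   camera_matrix, None)
    rot, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
    # Decompose R = Rz @ Ry @ Rx into pitch (X), yaw (Y) and roll (Z) angles.
    pitch = np.degrees(np.arctan2(rot[2, 1], rot[2, 2]))
    yaw = np.degrees(np.arcsin(-rot[2, 0]))
    roll = np.degrees(np.arctan2(rot[1, 0], rot[0, 0]))
    return pitch, yaw, roll
```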
  • the correction parameters can be determined according to the pose information, thereby correcting the gaze point detection error caused by the change of pose, so that the gaze information obtained from the reference gaze point coordinates and the correction parameters is more accurate.
  • when calculating the reference gaze point coordinates, the processor 60 can calculate them directly from the face area image, or perform feature point recognition on the face area image and calculate the reference gaze point coordinates from the feature points, which keeps the amount of calculation relatively small; alternatively, the processor 60 can obtain the face area image and the eye area image, perform feature point recognition on the face area image, and calculate the reference gaze point coordinates jointly from the feature points of the face area image and the eye area image, which further improves the calculation accuracy of the reference gaze point coordinates while keeping the amount of calculation small.
  • the processor 60 can first determine whether the pose information is greater than a preset threshold. The pose information includes the pitch angle, roll angle and yaw angle of the face; because a change in the roll angle (rotation of the face parallel to the display screen 40) does not change the positions of the facial feature points within the face, it is sufficient to judge only whether the pitch angle or the yaw angle is greater than the preset threshold.
  • for example, when the face is directly facing the display screen 40, the pitch angle, roll angle and yaw angle are all 0 degrees; with a preset threshold of 0 degrees, when the pose information is greater than 0 degrees (such as when the pitch angle or yaw angle is greater than 0 degrees), it can be determined that the reference gaze point coordinates need to be corrected.
  • since the pitch angle, roll angle and yaw angle are signed, they may take negative values, which would affect the accuracy of the judgment; therefore, when judging whether the pose information is greater than the preset threshold, the absolute value of the pose information can be compared against the preset threshold.
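  • As a minimal sketch of this absolute-value threshold check (the threshold value here is an illustrative assumption):

```python
# A minimal sketch of the threshold check described above; the threshold
# value is an illustrative assumption, not the patent's.
PRESET_THRESHOLD_DEG = 5.0  # hypothetical tolerance for small deflections

def needs_correction(pitch_deg: float, yaw_deg: float) -> bool:
    """Compare absolute pitch/yaw against the preset threshold; the roll angle
    is ignored because in-plane rotation does not shift the feature points
    within the face."""
    return abs(pitch_deg) > PRESET_THRESHOLD_DEG or abs(yaw_deg) > PRESET_THRESHOLD_DEG
```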
  • if the pose information is greater than the preset threshold, the processor 60 calculates the correction parameters according to the pose information.
  • for example, the correction parameters may include coordinate correction coefficients, and the gaze information can be obtained according to the reference gaze point coordinates and the coordinate correction coefficients: if the reference gaze point coordinates are (x, y) and the coordinate correction coefficients are a and b, then the gaze information is (ax, by). Alternatively, the reference gaze point information may include both the two-dimensional coordinates of the gaze point on the display screen 40 and the direction of the line of sight, in which case the correction parameters include coordinate correction coefficients and direction correction coefficients, and the gaze information is obtained according to the reference gaze point coordinates, the coordinate correction coefficients and the direction correction coefficients: if the reference gaze point coordinates are (x, y), the line-of-sight direction is (α, β, γ), the coordinate correction coefficients are a and b, and the direction correction coefficients are c, d and e, then the gaze information is (ax, by, cα, dβ, eγ).
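  • A minimal sketch of applying these correction coefficients, following the (ax, by, cα, dβ, eγ) example above; in practice the coefficient values would be derived from the pose information:

```python
# A minimal sketch of applying the correction coefficients from the example
# above; the coefficients would in practice come from the pose information.
from typing import Sequence, Tuple

def correct_gaze(point_xy: Tuple[float, float],
                 direction_abg: Tuple[float, float, float],
                 coord_coeffs: Sequence[float],
                 dir_coeffs: Sequence[float]) -> Tuple[float, ...]:
    """Scale the reference gaze point (x, y) and line-of-sight direction
    (alpha, beta, gamma) by their correction coefficients, yielding
    (a*x, b*y, c*alpha, d*beta, e*gamma)."""
    x, y = point_xy
    a, b = coord_coeffs
    c, d, e = dir_coeffs
    alpha, beta, gamma = direction_abg
    return (a * x, b * y, c * alpha, d * beta, e * gamma)
```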
  • if the pose information is less than or equal to the preset threshold, the user is facing the display screen 40 or is only slightly deflected relative to the display screen 40; it can then be determined that the reference gaze point coordinates do not need to be corrected, and after calculating the reference gaze point coordinates, the processor 60 can directly take them as the final gaze information, saving the computation needed to calculate the correction parameters.
  • the preset threshold can also be set larger: for example, with a preset threshold of 5 degrees, the deflection of the face is still small and the detection accuracy of the gaze information is basically unaffected, which avoids unnecessary calculation of correction parameters. Alternatively, the preset threshold can be set according to the requirements on the gaze information: if the gaze information only includes the gaze direction and accurate gaze point coordinates are not needed, the preset threshold can be set larger; if the gaze information includes the gaze point coordinates on the display screen 40, the preset threshold can be set smaller, so as to ensure the accuracy of gaze point detection.
  • after the gaze information is determined, the electronic device 100 can be controlled according to the gaze information (the gaze direction and/or the gaze point coordinates). For example, when it is detected that the gaze point coordinates are located in the display area of the display screen 40, the screen is kept on; after it is detected that the gaze point coordinates have been outside the display area of the display screen 40 for a predetermined duration (such as 10 s or 20 s), the screen is turned off. Or, operations such as page turning are performed according to changes in the gaze direction.
  • the obtained face information may not be accurate enough due to factors such as shooting angles, thereby affecting the accuracy of gaze point detection.
  • with the gaze detection method, the detection device 10 and the electronic device 100 of the present application, after the face information is obtained, the face pose is first calculated from the face information; if the pose information is greater than the preset threshold, which would degrade the calculation accuracy of the gaze point coordinates, the reference gaze point coordinates are first calculated according to the face information, and the correction parameters are then calculated according to the pose information, so that the reference gaze point coordinates can be corrected according to the correction parameters. This prevents an excessively large face shooting angle in the acquired face information from impairing gaze detection, and thus improves the accuracy of gaze detection.
  • the face information includes a face mask, a left-eye image and a right-eye image, and the face mask is used to indicate the position of the face in the image
  • Step 011 Calculate the reference gaze point coordinates according to the face information, including:
  • 0111 Calculate the position information of the face relative to the electronic device 100 according to the face mask
  • 0112 Calculate the coordinates of the reference gaze point according to the position information, the left-eye image and the right-eye image.
  • the first determining module 11 is configured to calculate the position information of the face relative to the electronic device 100 according to the face mask; and calculate the reference gaze point coordinates according to the position information, the left-eye image and the right-eye image. That is to say, step 0111 and step 0112 can be executed by the first determination module 11 .
  • the processor 60 is further configured to calculate the position information of the face relative to the electronic device 100 according to the face mask; and calculate the reference gaze point coordinates according to the position information, the left-eye image and the right-eye image. That is to say, step 0111 and step 0112 may be executed by the processor 60 .
  • when calculating the reference gaze point coordinates, the processor 60 can first determine the face mask of the face image; the face mask represents the position of the face in the face image and can be obtained by recognizing the position of the face in the face image. The processor 60 can then calculate the position information of the face relative to the electronic device 100 according to the face mask (for example, the distance between the face and the electronic device 100 can be calculated from the ratio of the face mask to the face image).
  • it can be understood that when the distance between the face and the electronic device 100 changes, the gaze point coordinates of the eyes change even if the gaze direction of the eyes does not. Therefore, when calculating the gaze information, in addition to the face image and/or the eye images (such as the left-eye image and the right-eye image), the position information can also be combined to calculate the gaze point coordinates more accurately.
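  • A minimal sketch of deriving coarse position information from the mask-to-image ratio mentioned above; the reference constants are illustrative assumptions:

```python
# A minimal sketch of estimating face-to-device distance from a face mask,
# following the ratio idea above; the reference constants are hypothetical.
import numpy as np

REF_AREA_RATIO = 0.25   # hypothetical mask/image area ratio at the reference distance
REF_DISTANCE_CM = 30.0  # hypothetical reference distance

def face_distance_cm(face_mask: np.ndarray) -> float:
    """Estimate face-to-device distance from a binary face mask (H x W).
    Apparent area scales roughly with 1/distance^2, so distance scales with
    sqrt(reference_ratio / observed_ratio)."""
    ratio = face_mask.astype(bool).mean()  # fraction of image covered by the face
    return REF_DISTANCE_CM * float(np.sqrt(REF_AREA_RATIO / max(ratio, 1e-6)))
```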
  • face information includes face feature points
  • attitude information includes attitude angle and three-dimensional coordinate offset
  • correction parameters include rotation matrix and translation matrix
  • step 011, determining the pose information of the face according to the face information, includes:
  • 0113 Calculate the pose angle and the three-dimensional coordinate offset according to the face feature points.
  • step 013, calculating the correction parameters according to the pose information, includes:
  • 0131 Calculate the rotation matrix according to the pose angle, and calculate the translation matrix according to the three-dimensional coordinate offset.
  • the first determination module 11 is also used to calculate the pose angle and the three-dimensional coordinate offset according to the face feature points; the second determination module 12 is also used to calculate the rotation matrix according to the pose angle, and calculate the translation matrix according to the three-dimensional coordinate offset. That is to say, step 0113 can be performed by the first determination module 11, and step 0131 can be performed by the second determination module 12.
  • the processor 60 is further configured to calculate the pose angle and the three-dimensional coordinate offset according to the face feature points, calculate the rotation matrix according to the pose angle, and calculate the translation matrix according to the three-dimensional coordinate offset. That is to say, step 0113 and step 0131 can be executed by the processor 60.
  • the correction parameters may include a rotation matrix and a translation matrix, representing the face pose change and position change respectively.
  • specifically, the pose angle and the three-dimensional coordinate offset may first be calculated according to the face feature points, where the pose angle represents the attitude of the face (such as the pitch angle, roll angle and yaw angle) and the three-dimensional coordinate offset represents the position of the face; the rotation matrix is then calculated according to the pose angle, and the translation matrix is calculated according to the three-dimensional coordinate offset, thereby determining the correction parameters for the reference gaze point coordinates, so that the gaze information can be calculated accurately according to the reference gaze point coordinates, the rotation matrix and the translation matrix.
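  • A minimal sketch of building these correction parameters: a rotation matrix composed from the pose angles and a translation from the three-dimensional coordinate offset. The rotation order is an assumption, since the patent does not fix a convention:

```python
# A minimal sketch of the rotation matrix / translation correction named
# above; the Rz @ Ry @ Rx rotation order is an assumption.
import numpy as np

def rotation_matrix(pitch: float, yaw: float, roll: float) -> np.ndarray:
    """Compose per-axis rotations (angles in radians) into one 3x3 matrix."""
    cx, sx = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    cz, sz = np.cos(roll), np.sin(roll)
    rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return rz @ ry @ rx

def correct_reference_point(ref_point: np.ndarray,
                            pose_angles: tuple,
                            offset: np.ndarray) -> np.ndarray:
    """Apply rotation then translation to a 3D reference gaze point; the
    offset plays the role of the translation matrix."""
    r = rotation_matrix(*pose_angles)
    return r @ ref_point + offset
```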
  • gaze detection method also includes:
  • 0101 Obtain a training sample set, where the training sample set includes first-type samples whose face pose information is less than a preset threshold and second-type samples whose face pose information is greater than the preset threshold;
  • 0102 Train a preset detection model according to the first type of samples and the second type of samples;
  • step 013 includes:
  • 0132 Determine the correction parameters according to the pose information based on the detection model.
  • the detection device 10 further includes an acquisition module 14 and a training module 15. Both the acquisition module 14 and the training module 15 can be set in the NPU to train the detection model.
  • the acquisition module 14 is used to obtain the training sample set;
  • the training module 15 is used to train the preset detection model according to the first-type samples and the second-type samples;
  • the second determination module 12 is also used to determine the correction parameters according to the posture information based on the detection model . That is to say, step 0101 may be performed by the acquisition module 14 , step 0102 may be performed by the training module 15 , and step 0132 may be performed by the second determination module 12 .
  • the processor 60 is further configured to obtain a training sample set; train a preset detection model according to the first type of samples and the second type of samples; and determine correction parameters according to the posture information based on the detection model. That is to say, step 0101 , step 0102 and step 0132 can be executed by the processor 60 .
  • the present application can realize calculation of gaze information through a preset detection model.
  • it is necessary to first train the detection model so that the detection model converges.
  • in order for the detection model to still calculate the gaze information accurately when the face is deflected relative to the display screen 40, a plurality of first-type samples whose face pose information is less than the preset threshold and a plurality of second-type samples whose face pose information is greater than the preset threshold can be selected in advance as the training sample set, where the first-type samples are face images whose pose information is less than the preset threshold, and the second-type samples are face images whose pose information is greater than the preset threshold. In this way, the detection model is trained with both the first-type and the second-type samples, and after being trained to convergence, the detection model minimizes the impact of deflection of the face relative to the display screen 40 when detecting the gaze information, ensuring the accuracy of gaze detection.
  • step 0102 includes:
  • 01021 Input the first type of samples into the fixation point detection module to output the first training coordinates
  • 01022 Input the second type of samples into the gaze point detection module and the correction module to output the second training coordinates;
  • 01023 Based on the preset loss function, calculate the first loss value according to the first preset coordinates corresponding to the first-type samples and the first training coordinates, and calculate the second loss value according to the second preset coordinates corresponding to the second-type samples and the second training coordinates;
  • 01024 Adjust the detection model according to the first loss value and the second loss value until the detection model converges.
  • the training module 15 is also used to input the first-type samples into the gaze point detection module to output the first training coordinates; input the second-type samples into the gaze point detection module and the correction module to output the second training coordinates; based on the preset loss function, calculate the first loss value according to the first preset coordinates corresponding to the first-type samples and the first training coordinates, and calculate the second loss value according to the second preset coordinates corresponding to the second-type samples and the second training coordinates; and adjust the detection model according to the first loss value and the second loss value until the detection model converges. That is to say, step 01021 to step 01024 can be executed by the training module 15.
  • the processor 60 is also used to input the first-type samples into the gaze point detection module to output the first training coordinates; input the second-type samples into the gaze point detection module and the correction module to output the second training coordinates; based on the preset loss function, calculate the first loss value according to the first preset coordinates corresponding to the first-type samples and the first training coordinates, and calculate the second loss value according to the second preset coordinates corresponding to the second-type samples and the second training coordinates; and adjust the detection model according to the first loss value and the second loss value until the detection model converges. That is to say, step 01021 to step 01024 may be executed by the processor 60.
  • the detection model 50 includes a gaze point detection module 51 and a correction module 52.
  • during training, the training sample set is input to the detection model: the first-type samples are input to the gaze point detection module 51 to output the first training coordinates; since the pose information of the first-type samples is less than the preset threshold, the first training coordinates are output directly. The second-type samples are input to the gaze point detection module 51 and the correction module 52 simultaneously: the gaze point detection module 51 outputs reference training coordinates, the correction module 52 then outputs correction parameters, and the reference training coordinates are corrected according to the correction parameters to output the second training coordinates.
  • each training sample has a corresponding preset coordinate
  • the preset coordinate represents the actual gaze information of the training sample
  • the first type of training sample corresponds to the first preset coordinate
  • the second type of training sample corresponds to the second preset coordinate
  • the processor 60 can calculate the first loss value based on the preset loss function, the first training coordinates and the first preset coordinates, and then adjust the gaze point detection module 51 based on the first loss value, so that the first training coordinates output by the gaze point detection module 51 gradually approach the first preset coordinates until convergence; similarly, the processor 60 can calculate the second loss value based on the preset loss function, the second training coordinates and the second preset coordinates, and then adjust the gaze point detection module 51 and the correction module 52 simultaneously based on the second loss value, so that the second training coordinates output by the detection model gradually approach the second preset coordinates until convergence.
  • the loss function can take a form such as loss = (1/N) Σ_{i=1..N} [(X_i - Gx_i)² + (Y_i - Gy_i)²], where loss is the loss value, N is the number of training samples contained in each training batch, X and Y are the training coordinates (such as the first training coordinates or the second training coordinates), and Gx and Gy are the preset coordinates (such as the first preset coordinates and the second preset coordinates). When the training coordinates represent the gaze direction, X and Y represent the pitch angle and the yaw angle, respectively; when the training coordinates are the gaze point coordinates, X and Y represent the coordinates of the gaze point in the plane of the display screen 40. Such a form allows the first loss value and the second loss value to be calculated quickly.
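  • A minimal sketch of computing this batch loss, using the squared-error form given above; the exact published formula is not reproduced in this text, so treat this as an assumption:

```python
# A minimal sketch of the batch loss described above, using the squared-error
# form as an assumption; substitute the patent's exact formula if known.
import numpy as np

def batch_loss(train_xy: np.ndarray, preset_xy: np.ndarray) -> float:
    """train_xy, preset_xy: (N, 2) arrays holding the (X, Y) training
    coordinates and the (Gx, Gy) preset (ground-truth) coordinates for one
    batch of N samples."""
    diff = train_xy - preset_xy
    return float(np.mean(np.sum(diff ** 2, axis=1)))
```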
  • the processor 60 can adjust the detection model according to the first loss value and the second loss value, descending along the gradient so that the loss decreases continuously and the training coordinates move ever closer to the preset coordinates, until the detection model is finally trained to convergence.
  • for example, in the training process of N consecutive batches, when the first difference between the first loss values corresponding to any two batches and the second difference between the second loss values corresponding to any two batches are both less than a predetermined difference threshold, the detection model is determined to have converged, where N is a positive integer greater than 1; that is, if the first loss value and the second loss value essentially stop changing over N consecutive training batches, they have reached their limit, and it can be determined that the detection model has converged. Alternatively, when both the first loss value and the second loss value are less than a predetermined loss threshold, the detection model is determined to have converged.
  • the detection model is trained to converge through the first type of training samples and the second type of training samples, so as to ensure that the detection model can still output accurate gaze information according to the face information when the face is deflected.
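  • A minimal PyTorch-style sketch of the two-branch training of steps 01021 to 01024; the submodule architectures, the optimizer, and the folding of the correction into a single call are assumptions for illustration, not the patent's implementation. Training would repeat such steps until the convergence criteria above are met.

```python
# A minimal sketch of the two-branch training described above; module
# architectures and optimizer settings are assumptions.
import torch
from torch import nn

class DetectionModel(nn.Module):
    """Detection model 50: gaze point detection module 51 plus correction
    module 52; the two submodules are left abstract here."""
    def __init__(self, gaze_module: nn.Module, correction_module: nn.Module):
        super().__init__()
        self.gaze = gaze_module
        self.correction = correction_module

    def forward(self, faces: torch.Tensor, deflected: bool) -> torch.Tensor:
        coords = self.gaze(faces)  # reference (or first) training coordinates
        if deflected:
            # For second-type samples, the correction module's output corrects
            # the reference coordinates (folded into one call for brevity).
            coords = self.correction(coords)
        return coords

def train_step(model, optimizer, loss_fn, batch1, batch2):
    """One step over steps 01021-01024: batch1 = (first-type faces, first
    preset coords); batch2 = (second-type faces, second preset coords)."""
    x1, g1 = batch1
    x2, g2 = batch2
    loss1 = loss_fn(model(x1, deflected=False), g1)  # first loss value
    loss2 = loss_fn(model(x2, deflected=True), g2)   # second loss value
    optimizer.zero_grad()
    (loss1 + loss2).backward()  # adjust the model from both loss values
    optimizer.step()
    return loss1.item(), loss2.item()
```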
  • control method of the electronic device 100 in the embodiment of the present application includes the following steps:
  • 021 Determine the pose information of the face according to the face information, and determine the reference gaze point coordinates according to the face information;
  • 023 In response to the pose information being greater than a preset threshold, determine correction parameters according to the pose information;
  • 025 Determine the gaze information according to the reference gaze point coordinates and the correction parameters;
  • 027 Control the electronic device 100 according to the gaze information.
  • the control device 20 in the embodiment of the present application includes an acquisition module 21 , a first determination module 22 , a second determination module 23 and a control module 24 .
  • the acquisition module 21 is used to determine the pose information of the face according to the face information, and to determine the reference gaze point coordinates according to the face information;
  • the first determining module 22 is used for determining correction parameters according to posture information in response to posture information being greater than a preset threshold;
  • the second determination module 23 is used to determine the gaze information according to the coordinates of the reference gaze point and the correction parameters;
  • the control module 24 is used to control the electronic device 100 according to the gaze information. That is to say, step 021 can be performed by the acquisition module 21 , step 023 can be performed by the first determination module 22 , step 025 can be performed by the second determination module 23 and step 027 can be performed by the control module 24 .
  • the electronic device 100 in the embodiment of the present application includes a processor 60 and a collection device 30 .
  • the acquisition device 30 is used to collect face information at a predetermined frame rate (the face information includes a face image, such as a visible light image, an infrared image, or a depth image of the face);
  • the acquisition device 30 may be one or more of a visible light camera, an infrared camera, and a depth camera, where the visible light camera can collect visible light face images, the infrared camera can collect infrared face images, and the depth camera can collect depth face images;
  • for example, when the acquisition device 30 includes a visible light camera, an infrared camera, and a depth camera, the acquisition device 30 can simultaneously acquire a visible light face image, an infrared face image, and a depth face image.
  • the processor 60 may include an ISP, an NPU and an AP. For example, the control device 20 is arranged in the electronic device 100, the acquisition module 21 is arranged on the ISP and the NPU, and the processor 60 is connected to the acquisition device 30. After the acquisition device 30 collects the face image, the ISP can process the face image to determine the pose information of the face according to the face information, and the NPU can determine the reference gaze point coordinates according to the face information. The first determination module 22 and the second determination module 23 can be arranged on the NPU, and the control module 24 can be arranged on the AP.
  • the processor 60 (specifically, the ISP and the NPU) is used to obtain the face information and the pose information; the processor 60 (specifically, the NPU) is also used to determine the correction parameters according to the pose information in response to the pose information being greater than a preset threshold, and to determine the gaze information according to the reference gaze point coordinates and the correction parameters; the processor 60 (specifically, the AP) can also be used to control the electronic device 100 according to the gaze information. That is to say, step 021 can be executed by the acquisition device 30 in cooperation with the processor 60, and step 023, step 025 and step 027 can be executed by the processor 60.
  • for the manner of determining the gaze information, that is, step 021, step 023 and step 025, please refer to the descriptions of step 011, step 013 and step 015, respectively; details are not repeated here.
  • the electronic device 100 can be controlled according to the gaze direction and gaze point coordinates.
  • a three-dimensional coordinate system is established with the midpoint of the eyes as the origin O1, the X1 axis is parallel to the direction of the line connecting the centers of the eyes, the Y1 axis is located on the horizontal plane and perpendicular to the X1 axis, and the Z1 axis is perpendicular to the X1 axis and Y1 axis.
  • the rotation angles of the line of sight S about the three axes of the three-dimensional coordinate system indicate the user's gaze direction.
  • the gaze direction includes a pitch angle, a roll angle and a yaw angle.
  • the pitch angle represents the rotation angle around the X1 axis, the roll angle represents the rotation angle around the Y1 axis, and the yaw angle represents the rotation angle around the Z1 axis.
  • the processor 60 can perform page-turning or sliding operations on the display content of the electronic device 100 according to the gaze direction. For example, the change in the gaze direction can be determined from the gaze directions of multiple consecutive frames of eye area images (such as 10 consecutive frames): referring to FIG. 11 and FIG. 12, when the pitch angle gradually increases (that is, the line of sight S tilts up), it can be determined that the user wants the displayed content to slide up or the page to turn down; referring to FIG. 11 and FIG. 13, when the pitch angle gradually decreases (that is, the line of sight S tilts down), it can be determined that the user wants the displayed content to slide down or the page to turn up.
  • similarly, sliding or page turning on the electronic device 100 can also be controlled according to the gaze point coordinates.
  • the center of the display screen 40 can be used as the coordinate origin O2 to establish a plane coordinate system
  • the width direction parallel to the electronic device 100 is used as the X2 axis
  • the length direction parallel to the electronic device 100 is used as the Y2 axis
  • the gaze point coordinates include the abscissa (corresponding to the position on the X2 axis) and the ordinate (corresponding to the position on the Y2 axis).
  • if the ordinate gradually increases, the gaze point M moves up, and it can be determined that the user wants the displayed content to slide up or the page to turn down; if the ordinate gradually decreases, the gaze point M moves down, and it can be determined that the user wants the displayed content to slide down or the page to turn up.
  • the processor 60 can also determine the change speed of the gaze direction over 10 consecutive frames (for example, from the difference between the pitch angles of the first frame and the tenth frame, or the difference between the ordinates of the gaze point M, together with the elapsed duration); the faster the change speed, the more new display content is shown after sliding.
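  • A minimal sketch of mapping this change speed to a scroll amount; the gain constant is an illustrative assumption:

```python
# A minimal sketch of mapping gaze change across consecutive frames to a
# scroll amount, per the description above; the gain constant is hypothetical.
SCROLL_GAIN_PX_PER_DEG_PER_S = 40.0  # hypothetical tuning constant

def scroll_amount(pitch_first_deg: float, pitch_last_deg: float,
                  elapsed_s: float) -> float:
    """Positive result scrolls the content up (page down), negative scrolls
    down. Change speed = pitch difference between the first and last of the
    ~10 frames, divided by the elapsed time."""
    speed = (pitch_last_deg - pitch_first_deg) / max(elapsed_s, 1e-3)
    return SCROLL_GAIN_PX_PER_DEG_PER_S * speed
```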
  • the screen can then be turned off after the user has not looked at the display screen 40 for a predetermined duration (such as 10 s or 20 s).
  • with the control method, the control device 20 and the electronic device 100, after the face information and pose information are obtained, if the pose information is greater than the preset threshold, which would degrade the calculation accuracy of the gaze point coordinates, the reference gaze point coordinates are first calculated according to the face information, and the correction parameters are then calculated according to the pose information.
  • the reference gaze point coordinates can then be corrected according to the correction parameters to obtain accurate gaze information, preventing an excessively large face shooting angle in the acquired face information from impairing gaze detection, which improves the accuracy of gaze detection.
  • in this way, the control accuracy of the electronic device 100 can also be improved.
  • the face information includes a face mask, a left-eye image and a right-eye image, and the face mask is used to indicate the position of the face in the image
  • step 021, calculating the reference gaze point coordinates according to the face information, includes:
  • 0211 Calculate the position information of the face relative to the electronic device 100 according to the face mask;
  • 0212 Calculate the reference gaze point coordinates according to the position information, the left-eye image and the right-eye image.
The first determination module 22 is further configured to calculate the position information of the face relative to the electronic device 100 according to the face mask, and to calculate the reference gaze point coordinates according to the position information, the left-eye image and the right-eye image. That is to say, step 0211 and step 0212 can be executed by the first determination module 22.

The processor 60 is further configured to calculate the position information of the face relative to the electronic device 100 according to the face mask, and to calculate the reference gaze point coordinates according to the position information, the left-eye image and the right-eye image. That is to say, step 0211 and step 0212 can be executed by the processor 60.
For step 0211 and step 0212, please refer to step 0111 and step 0112 respectively; details are not repeated here.
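For illustration only, the following Python sketch shows one plausible way to derive the face position from a face mask and combine it with the eye images to regress a reference gaze point. The tiny "regressor" and all weights are toy placeholders, not the model used by the detection device 10.

```python
# Illustrative sketch: face position from a binary face mask, then a toy
# combination with eye crops to produce a reference gaze point (x, y).
import numpy as np

def face_position_from_mask(mask: np.ndarray) -> np.ndarray:
    """Normalized face center and size; mask is assumed to be non-empty."""
    ys, xs = np.nonzero(mask)
    h, w = mask.shape
    cx, cy = xs.mean() / w, ys.mean() / h          # face center (0..1)
    size = (xs.max() - xs.min()) / w               # rough distance cue
    return np.array([cx, cy, size])

def reference_gaze(mask, left_eye, right_eye) -> np.ndarray:
    pos = face_position_from_mask(mask)
    # Toy feature: mean-intensity asymmetry and sum of the eye crops.
    eye_feat = np.array([left_eye.mean() - right_eye.mean(),
                         left_eye.mean() + right_eye.mean()])
    feats = np.concatenate([pos, eye_feat])
    W = np.full((2, 5), 0.1)                       # placeholder weights
    return W @ feats                               # reference gaze point (x, y)

mask = np.zeros((120, 90))
mask[30:90, 25:65] = 1.0
left_eye = np.random.rand(24, 32)
right_eye = np.random.rand(24, 32)
print(reference_gaze(mask, left_eye, right_eye))
```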
In some embodiments, the face information includes face feature points, the attitude information includes an attitude angle and a three-dimensional coordinate offset, and the correction parameters include a rotation matrix and a translation matrix. Determining the pose information of the face according to the face information in step 021 includes: 0233, calculating the attitude angle and the three-dimensional coordinate offset according to the face feature points. Step 023 includes: 0234, calculating the rotation matrix according to the attitude angle, and calculating the translation matrix according to the three-dimensional coordinate offset.
The first determination module 22 is further configured to calculate the attitude angle and the three-dimensional coordinate offset according to the face feature points, to calculate the rotation matrix according to the attitude angle, and to calculate the translation matrix according to the three-dimensional coordinate offset. That is to say, step 0233 and step 0234 can be executed by the first determination module 22.

The processor 60 is further configured to calculate the attitude angle and the three-dimensional coordinate offset according to the face feature points, to calculate the rotation matrix according to the attitude angle, and to calculate the translation matrix according to the three-dimensional coordinate offset. That is to say, step 0233 and step 0234 can be executed by the processor 60.

For step 0233 and step 0234, please refer to step 0133 and step 0134 respectively; details are not repeated here.
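For illustration only, a minimal Python sketch of how a rotation matrix could be formed from an attitude angle expressed as (pitch, yaw, roll) Euler angles, and a translation vector from the three-dimensional coordinate offset. The ZYX composition order and all numeric values are assumptions, not details confirmed by this application.

```python
# Minimal sketch: rotation matrix from (pitch, yaw, roll) Euler angles
# and translation from the 3D coordinate offset (ZYX order assumed).
import numpy as np

def rotation_from_attitude(pitch: float, yaw: float, roll: float) -> np.ndarray:
    p, y, r = np.deg2rad([pitch, yaw, roll])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(p), -np.sin(p)],
                   [0, np.sin(p),  np.cos(p)]])
    Ry = np.array([[ np.cos(y), 0, np.sin(y)],
                   [0, 1, 0],
                   [-np.sin(y), 0, np.cos(y)]])
    Rz = np.array([[np.cos(r), -np.sin(r), 0],
                   [np.sin(r),  np.cos(r), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx                          # rotation matrix

def translation_from_offset(offset_xyz) -> np.ndarray:
    return np.asarray(offset_xyz, dtype=float)   # translation (vector form)

R = rotation_from_attitude(pitch=25.0, yaw=-10.0, roll=3.0)
t = translation_from_offset([0.01, -0.02, 0.30])
print(R @ np.array([0.0, 0.0, 1.0]) + t)         # correct a toy 3D gaze vector
```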
In some embodiments, the gaze information includes gaze point coordinates, and the control method further includes: 0201, acquiring a captured image within a first predetermined duration before the screen is turned off; and 0202, in response to the captured image containing a human face, determining the gaze information according to the captured image. Controlling the electronic device 100 according to the gaze information in step 027 includes: 0271, in response to the gaze point coordinates being located within the display area of the display screen 40, keeping the screen on for a second predetermined duration.

The control module 24 is further configured to acquire the captured image within the first predetermined duration before the screen is turned off, to determine the gaze information according to the captured image in response to the captured image containing a human face, and to keep the screen on for a second predetermined duration in response to the gaze point coordinates being located within the display area. That is to say, step 0201, step 0202 and step 0271 can be executed by the control module 24.

The processor 60 is further configured to acquire the captured image within the first predetermined duration before the screen is turned off, to determine the gaze information according to the captured image in response to the captured image containing a human face, and to keep the screen on for a second predetermined duration in response to the gaze point coordinates being located within the display area of the display screen 40. That is to say, step 0201, step 0202 and step 0271 can be executed by the processor 60.
After the gaze information is determined, it can be used to realize screen-off control. Specifically, within the first predetermined duration (such as 5 seconds, 10 seconds, etc.) before the screen is turned off, gaze detection is performed: the processor 60 first acquires a captured image, and if there is a human face in the captured image, the gaze information is determined according to the captured image. When the gaze point M is located within the display area of the display screen 40, it can be determined that the user is looking at the display screen 40, so the screen remains on for a second predetermined duration, which can be greater than the first predetermined duration. Within the first predetermined duration before the screen would next turn off, the captured image is acquired again, so that the screen remains bright while the user looks at the display screen 40 and is turned off once the user no longer looks at it.

When determining whether the gaze point M is located within the display area, the center of the display area can be used as the coordinate origin O2 to establish a two-dimensional coordinate system parallel to the display screen 40. The display area then corresponds to a preset coordinate range consisting of an abscissa range and an ordinate range; when the gaze point coordinates are within the preset coordinate range (that is, the abscissa of the gaze point coordinates is within the abscissa range and the ordinate is within the ordinate range), the gaze point coordinates are located in the display area. Determining whether the user gazes at the display screen 40 is therefore relatively simple, as the sketch after this paragraph illustrates.
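For illustration only, the following Python sketch shows the bounds check and the keep-screen-on decision described above; the display half-extents and the two durations are hypothetical values, not ones given in this application.

```python
# Illustrative sketch: bounds check for the gaze point around origin O2
# and the resulting keep-screen-on decision.
from dataclasses import dataclass

HALF_W, HALF_H = 0.034, 0.075     # display half-width/height in meters (toy)

def in_display_area(x: float, y: float) -> bool:
    """Origin O2 at the display center; both axes parallel to the screen."""
    return -HALF_W <= x <= HALF_W and -HALF_H <= y <= HALF_H

@dataclass
class ScreenTimer:
    first_duration_s: float = 10.0    # check window before screen-off
    second_duration_s: float = 20.0   # extension granted when gazed at

    def extend_if_gazed(self, face_found: bool, gaze_xy) -> float:
        """Seconds to keep the screen on after a check (0 = turn off)."""
        if face_found and gaze_xy is not None and in_display_area(*gaze_xy):
            return self.second_duration_s
        return 0.0

timer = ScreenTimer()
print(timer.extend_if_gazed(True, (0.01, -0.02)))   # user looking -> 20.0
print(timer.extend_if_gazed(False, None))           # no face -> 0.0
```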
In some embodiments, the gaze information includes gaze point coordinates, and the control method further includes: 0203, acquiring a captured image in response to the electronic device 100 receiving no input operation; and 0204, in response to the captured image containing a human face, determining the gaze information according to the captured image. Step 027 includes: 0272, adjusting the display brightness of the display screen 40 to a first predetermined brightness in response to the captured image containing a human face and the gaze point coordinates being located within the display area; and 0273, adjusting the display brightness to a second predetermined brightness in response to the captured image not containing a human face, or containing a human face with the gaze point coordinates located outside the display area, the second predetermined brightness being less than the first predetermined brightness.

The control module 24 is further configured to acquire the captured image in response to the electronic device 100 receiving no input operation, to adjust the display brightness of the display screen 40 to the first predetermined brightness in response to the captured image containing a human face and the gaze point coordinates being located within the display area, and to adjust the display brightness to the second predetermined brightness, which is less than the first predetermined brightness, in response to the captured image not containing a human face, or containing a human face with the gaze point coordinates located outside the display area. That is to say, step 0203, step 0204, step 0272 and step 0273 can be executed by the control module 24.

The processor 60 is further configured to acquire the captured image in response to the electronic device 100 receiving no input operation, to adjust the display brightness of the display screen 40 to the first predetermined brightness in response to the captured image containing a human face and the gaze point coordinates being located within the display area, and to adjust the display brightness to the second predetermined brightness, which is less than the first predetermined brightness, in response to the captured image not containing a human face, or containing a human face with the gaze point coordinates located outside the display area. That is to say, step 0203, step 0204, step 0272 and step 0273 can be executed by the processor 60.
After the gaze information is determined, it can also be used to realize intelligent screen brightening. To save power, the electronic device 100 generally reduces the display brightness after the screen has been on without operation for a certain period, keeps the screen at low brightness for a further period, and then turns the screen off. When the electronic device 100 receives no input operation, the processor 60 can acquire the captured image, and if the image contains a human face, the gaze information is calculated according to the captured image. If the gaze point coordinates are located within the display area, the display brightness is adjusted to the first predetermined brightness. The first predetermined brightness can be the brightness set by the user for normal display, or it can change in real time with the ambient light so as to adapt to the ambient brightness. This ensures that the screen remains bright while the user views the displayed content even without operating the electronic device 100, and prevents the screen from suddenly dimming or turning off while the user is viewing the displayed content, which would affect the user experience.

If the captured image does not contain a human face, or the gaze point coordinates are located outside the display area, the display brightness can be adjusted to the second predetermined brightness, which is smaller than the first predetermined brightness, so as to prevent unnecessary power consumption. When the user looks at the display area again, the display brightness is adjusted back to the first predetermined brightness to ensure a normal viewing experience. In this way, when the user does not operate the electronic device 100 but looks at the display area, the display area is displayed at normal brightness, and when the user looks away, the brightness is reduced to save battery power. A compact sketch of this policy follows.
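For illustration only, a compact Python sketch of the brightness policy described above; the brightness values and the ambient-light scaling are hypothetical.

```python
# Illustrative sketch of the two-level brightness policy when no input
# operation is received; all numeric values are toy assumptions.
def display_brightness(face_found: bool, gaze_in_area: bool,
                       user_brightness: float, ambient_lux: float) -> float:
    """Return the target brightness (0..1) when no input is received."""
    # First predetermined brightness: user setting, adapted to ambient light.
    first = min(1.0, max(0.2, user_brightness * (0.5 + ambient_lux / 1000.0)))
    second = 0.5 * first            # second predetermined brightness (dimmer)
    if face_found and gaze_in_area:
        return first                # user is watching: keep normal brightness
    return second                   # nobody watching: dim to save power

print(display_brightness(True, True, user_brightness=0.8, ambient_lux=300))
print(display_brightness(True, False, user_brightness=0.8, ambient_lux=300))
```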
The present application also provides one or more non-volatile computer-readable storage media 300 containing a computer program 302. When the computer program 302 is executed by one or more processors 60, the processors 60 are caused to execute the gaze detection method or the control method of the electronic device 100 in any one of the above embodiments.

For example, when the computer program 302 is executed by the one or more processors 60, the processors 60 are caused to perform the following steps:

011, determining the posture information of the face according to the face information, and determining the reference gaze point coordinates according to the face information;

013, in response to the posture information being greater than a preset threshold, determining the correction parameters according to the posture information; and

015, determining the gaze information according to the reference gaze point coordinates and the correction parameters.

For another example, when the computer program 302 is executed by the one or more processors 60, the processors 60 may also perform the following steps:

021, determining the posture information of the face according to the face information, and determining the reference gaze point coordinates according to the face information;

023, in response to the posture information being greater than a preset threshold, determining the correction parameters according to the posture information;

025, determining the gaze information according to the reference gaze point coordinates and the correction parameters; and

027, controlling the electronic device 100 according to the gaze information.

Abstract

Disclosed are a gaze detection method, a control method for an electronic device (100), a detection device (10), a control device (20), an electronic device (100) and a non-volatile computer-readable storage medium (300). The gaze detection method comprises: determining posture information of a human face according to face information, and determining reference gaze point coordinates according to the face information (011); in response to the posture information being greater than a preset threshold, determining a correction parameter according to the posture information (013); and determining gaze information according to the reference gaze point coordinates and the correction parameter (015).
PCT/CN2022/126148 2021-10-29 2022-10-19 Procédé de détection de regard, procédé de commande pour dispositif électronique et dispositifs associés WO2023071884A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111271397.4 2021-10-29
CN202111271397.4A CN113936324A (zh) 2021-10-29 2021-10-29 注视检测方法、电子设备的控制方法及相关设备

Publications (1)

Publication Number Publication Date
WO2023071884A1 (fr) 2023-05-04

Family

ID=79285003

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/126148 WO2023071884A1 (fr) 2021-10-29 2022-10-19 Procédé de détection de regard, procédé de commande pour dispositif électronique et dispositifs associés

Country Status (2)

Country Link
CN (1) CN113936324A (fr)
WO (1) WO2023071884A1 (fr)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113936324A (zh) * 2021-10-29 2022-01-14 Oppo广东移动通信有限公司 注视检测方法、电子设备的控制方法及相关设备
CN116052261A (zh) * 2022-05-31 2023-05-02 荣耀终端有限公司 视线估计方法及电子设备
CN116052235B (zh) * 2022-05-31 2023-10-20 荣耀终端有限公司 注视点估计方法及电子设备
CN116030512B (zh) * 2022-08-04 2023-10-31 荣耀终端有限公司 注视点检测方法及装置
CN115509351B (zh) * 2022-09-16 2023-04-07 上海仙视电子科技有限公司 一种感官联动情景式数码相框交互方法与系统
CN117133043A (zh) * 2023-03-31 2023-11-28 荣耀终端有限公司 注视点估计方法、电子设备及计算机可读存储介质
CN116737051B (zh) * 2023-08-16 2023-11-24 北京航空航天大学 基于触控屏的视触结合交互方法、装置、设备和可读介质
CN117351074A (zh) * 2023-08-31 2024-01-05 中国科学院软件研究所 基于头戴式眼动仪和深度相机的视点位置检测方法及装置
CN117891352A (zh) * 2024-03-14 2024-04-16 南京市文化投资控股集团有限责任公司 一种基于元宇宙的文旅内容推荐系统及方法

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160217318A1 (en) * 2013-08-29 2016-07-28 Nec Corporation Image processing device, image processing method, and program
CN109993029A (zh) * 2017-12-29 2019-07-09 上海聚虹光电科技有限公司 注视点模型初始化方法
CN113544626A (zh) * 2019-03-15 2021-10-22 索尼集团公司 信息处理装置、信息处理方法和计算机可读记录介质
CN112232128A (zh) * 2020-09-14 2021-01-15 南京理工大学 基于视线追踪的老年残障人士照护需求识别方法
CN112509007A (zh) * 2020-12-14 2021-03-16 科大讯飞股份有限公司 真实注视点定位方法以及头戴式视线跟踪系统
CN113936324A (zh) * 2021-10-29 2022-01-14 Oppo广东移动通信有限公司 注视检测方法、电子设备的控制方法及相关设备

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117495937A (zh) * 2023-12-25 2024-02-02 荣耀终端有限公司 人脸图像处理的方法及电子设备
CN117495937B (zh) * 2023-12-25 2024-05-10 荣耀终端有限公司 人脸图像处理的方法及电子设备

Also Published As

Publication number Publication date
CN113936324A (zh) 2022-01-14

Similar Documents

Publication Publication Date Title
WO2023071884A1 (fr) Procédé de détection de regard, procédé de commande pour dispositif électronique et dispositifs associés
US9373156B2 (en) Method for controlling rotation of screen picture of terminal, and terminal
WO2023071882A1 (fr) Procédé de détection de regard humain, procédé de commande et dispositif associé
US9696859B1 (en) Detecting tap-based user input on a mobile device based on motion sensor data
TWI704501B (zh) 可由頭部操控的電子裝置與其操作方法
US9740281B2 (en) Human-machine interaction method and apparatus
CN100343867C (zh) 一种判别视线方向的方法和装置
US10489912B1 (en) Automated rectification of stereo cameras
US20220301218A1 (en) Head pose estimation from local eye region
CN104679225B (zh) 移动终端的屏幕调节方法、屏幕调节装置及移动终端
TWI631506B (zh) 螢幕旋轉控制方法及系統
CN109375765B (zh) 眼球追踪交互方法和装置
CN104574321A (zh) 图像修正方法、图像修正装置和视频系统
WO2020042542A1 (fr) Procédé et appareil d'acquisition de données d'étalonnage de commande de mouvement oculaire
US10866492B2 (en) Method and system for controlling tracking photographing of stabilizer
WO2012137801A1 (fr) Dispositif d'entrée, procédé d'entrée et programme informatique
WO2020019504A1 (fr) Procédé de déverouillage d'écran de robot, appareil, dispositif intelligent et support de stockage
CN104122983A (zh) 一种屏幕显示方向调整方法及装置
US20210118157A1 (en) Machine learning inference on gravity aligned imagery
WO2021197466A1 (fr) Procédé, appareil et dispositif de détection de globe oculaire, et support de stockage
CN110060295A (zh) 目标定位方法及装置、控制装置、跟随设备及存储介质
CN109377518A (zh) 目标追踪方法、装置、目标追踪设备及存储介质
CN102725713A (zh) 操作输入设备
CN110858095A (zh) 可由头部操控的电子装置与其操作方法
CN113487670A (zh) 一种化妆镜及状态调整方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 22885762; Country of ref document: EP; Kind code of ref document: A1)