CN110969084B - Method and device for detecting attention area, readable storage medium and terminal equipment


Info

Publication number
CN110969084B
Authority
CN
China
Prior art keywords: characteristic value, eye, position information, screen, determining
Prior art date
Legal status: Active
Application number
CN201911042436.6A
Other languages
Chinese (zh)
Other versions
CN110969084A (en)
Inventor
张�成
王杉杉
胡文泽
王孝宇
Current Assignee
Shenzhen Intellifusion Technologies Co Ltd
Original Assignee
Shenzhen Intellifusion Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Intellifusion Technologies Co Ltd
Priority to CN201911042436.6A
Publication of CN110969084A
Priority to PCT/CN2020/109069 (WO2021082636A1)
Application granted
Publication of CN110969084B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/19 Sensors therefor
    • G06V40/193 Preprocessing; Feature extraction
    • G06V40/197 Matching; Classification


Abstract

The present application belongs to the field of image processing technologies, and in particular relates to a method and an apparatus for detecting a region of interest, a computer-readable storage medium, and a terminal device. The method comprises: obtaining an eye image to be detected; detecting eye keypoints in the eye image to obtain the position information of each eye keypoint in the eye image; calculating a sight line characteristic value according to the position information of each eye keypoint in the eye image; and determining the eye attention region according to the sight line characteristic value. In the embodiments of the present application, the sight line characteristic value is calculated from the position information of the eye keypoints by analyzing and processing the eye image, without using expensive precision instruments, so that the eye attention region is determined; this greatly reduces cost and allows the method to be widely applied.

Description

Method and device for detecting attention area, readable storage medium and terminal equipment
Technical Field
The present application belongs to the field of image processing technologies, and in particular, to a method and an apparatus for detecting a region of interest, a computer-readable storage medium, and a terminal device.
Background
With the development of image recognition technology, human-computer interaction based on the line of sight of human eyes has become an active research topic. In commercial settings, the degree of a customer's interest in a commodity can be judged from the direction of the customer's attention, and targeted advertisement recommendations can then be made. This not only gives the customer a novel shopping experience, but can also bring better profit to merchants. The relative position of the iris within the visible part of the eyeball moves as the attention direction changes, which makes it possible to predict the attention direction from eye keypoints. When the range over which the attention region varies is small, the displacement of the iris position is small, making effective quantitative analysis difficult and the attention region hard to estimate accurately. In the prior art, eye tracker devices can track eye movement using infrared cameras and precision sensors, but their cost is very high, and they are difficult to apply widely.
Disclosure of Invention
In view of this, embodiments of the present application provide a method and an apparatus for detecting a region of interest, a computer-readable storage medium, and a terminal device, so as to solve the problem that existing methods for detecting a region of interest are very expensive and difficult to apply widely.
A first aspect of an embodiment of the present application provides a method for detecting a region of interest, which may include:
acquiring an eye image to be detected;
detecting eye key points in the eye image to obtain position information of each eye key point in the eye image;
calculating a sight line characteristic value according to the position information of each eye key point in the eye image;
and determining the eye attention area according to the sight line characteristic value.
Further, each eye keypoint in the eye image comprises: an iris center point, a left canthus and a right canthus;
the calculating of the sight line feature value according to the position information of each eye key point in the eye image includes:
calculating a first characteristic distance according to the position information of the central point of the iris and the position information of the left canthus;
calculating a second characteristic distance according to the position information of the left canthus and the position information of the right canthus;
and calculating the sight line characteristic value according to the first characteristic distance and the second characteristic distance.
Further, the determining the eye attention region according to the sight line feature value comprises:
acquiring preset characteristic value intervals, wherein each characteristic value interval corresponds to a preset screen area;
determining a characteristic value interval in which the sight line characteristic value is positioned as a characteristic value target interval;
and determining the screen area corresponding to the characteristic value target interval as the eye attention area.
Further, before obtaining preset feature value intervals, the method for detecting the attention area may further include:
dividing a preset screen into SN screen areas, wherein SN is an integer larger than 1;
respectively constructing each calibration sample set, wherein the s-th calibration sample set comprises F_S characteristic value samples, each characteristic value sample is a sight line characteristic value obtained when the s-th screen area is attended to, 1 ≤ s ≤ SN, and F_S is a positive integer;
respectively calculating the average characteristic value of each calibration sample set;
and determining each characteristic value interval according to the average characteristic value of each calibration sample set.
Further, the separately constructing each calibration sample set comprises:
displaying a preset pattern at the center position of an s-th screen area of the screen;
respectively collecting sample images of each frame, wherein the sample images are eye images when the pattern is concerned by a subject;
respectively calculating the sight characteristic value of each frame of sample image;
and constructing the sight line characteristic value of each frame of sample image into an s-th calibration sample set.
Further, the determining each eigenvalue interval according to the average eigenvalue of each calibration sample set includes:
traversing each boundary parameter in a preset parameter interval;
respectively determining a characteristic value interval division mode corresponding to each demarcation parameter according to the average characteristic value of each calibration sample set;
respectively calculating the detection error rate of various characteristic value interval division modes according to each calibration sample set;
and selecting an optimal division mode, and determining each characteristic value interval according to the optimal division mode, wherein the optimal division mode is the characteristic value interval division mode with the minimum detection error rate.
Further, the performing eye key point detection in the eye image to obtain the position information of each eye key point in the eye image includes:
and detecting eye key points in the eye image by using a Stacked Hourglass Model to obtain the position information of each eye key point in the eye image.
A second aspect of an embodiment of the present application provides an apparatus for detecting a region of interest, which may include:
the eye image acquisition module is used for acquiring an eye image to be detected;
the eye key point detection module is used for detecting eye key points in the eye image to obtain position information of each eye key point in the eye image;
the sight line characteristic value calculating module is used for calculating sight line characteristic values according to the position information of each eye key point in the eye images;
and the eye attention region determining module is used for determining the eye attention region according to the sight line characteristic value.
Further, each eye keypoint in the eye image comprises: an iris center point, a left canthus and a right canthus;
the sight line feature value calculation module includes:
the first characteristic distance calculation submodule is used for calculating a first characteristic distance according to the position information of the iris center point and the position information of the left canthus;
the second characteristic distance calculation submodule is used for calculating a second characteristic distance according to the position information of the left canthus and the position information of the right canthus;
and the sight line characteristic value operator module is used for calculating the sight line characteristic value according to the first characteristic distance and the second characteristic distance.
Further, the eye region of interest determination module comprises:
the characteristic value interval acquisition submodule is used for acquiring preset characteristic value intervals, wherein each characteristic value interval corresponds to a preset screen area;
the characteristic value target interval determining submodule is used for determining a characteristic value interval in which the sight line characteristic value is positioned as a characteristic value target interval;
and the eye attention region determining submodule is used for determining the screen region corresponding to the characteristic value target interval as the eye attention region.
Further, the region of interest detecting apparatus may further include:
the screen area dividing module is used for dividing a preset screen into SN screen areas, wherein SN is an integer larger than 1;
a calibration sample set constructing module, configured to respectively construct each calibration sample set, wherein the s-th calibration sample set comprises F_S characteristic value samples, each characteristic value sample is a sight line characteristic value obtained when the s-th screen area is attended to, 1 ≤ s ≤ SN, and F_S is a positive integer;
the average characteristic value calculation module is used for calculating the average characteristic value of each calibration sample set respectively;
and the characteristic value interval determining module is used for determining each characteristic value interval according to the average characteristic value of each calibration sample set.
Further, the calibration sample set construction module may include:
the pattern display submodule is used for displaying a preset pattern at the center of the s-th screen area of the screen;
the sample image acquisition submodule is used for respectively acquiring each frame of sample image, and the sample image is an eye image when the pattern is concerned by the subject;
the sample characteristic value operator module is used for respectively calculating sight line characteristic values of all frames of sample images;
and the calibration sample set constructing submodule is used for constructing the sight characteristic value of each frame of sample image into an s-th calibration sample set.
Further, the feature value interval determination module may include:
the parameter traversing submodule is used for traversing each boundary parameter in a preset parameter interval;
the division mode determining submodule is used for respectively determining a characteristic value interval division mode corresponding to each demarcation parameter according to the average characteristic value of each calibration sample set;
the detection error rate calculation submodule is used for calculating the detection error rates of various characteristic value interval division modes according to various calibration sample sets;
and the characteristic value interval determining submodule is used for selecting an optimal division mode and determining each characteristic value interval according to the optimal division mode, wherein the optimal division mode is the characteristic value interval division mode with the minimum detection error rate.
Further, the eye keypoint detection module is specifically configured to perform eye keypoint detection in the eye image by using a Stacked Hourglass Model, so as to obtain position information of each eye keypoint in the eye image.
A third aspect of embodiments of the present application provides a computer-readable storage medium, which stores a computer program, which when executed by a processor implements the steps of any one of the above-mentioned region-of-interest detection methods.
A fourth aspect of the embodiments of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of any one of the above-mentioned region-of-interest detection methods when executing the computer program.
A fifth aspect of embodiments of the present application provides a computer program product, which, when run on a terminal device, causes the terminal device to perform the steps of any of the above-mentioned region of interest detection methods.
Compared with the prior art, the embodiments of the present application have the following advantages: an eye image to be detected is obtained; eye keypoint detection is performed in the eye image to obtain the position information of each eye keypoint; a sight line characteristic value is calculated according to this position information; and the eye attention region is determined according to the sight line characteristic value. In the embodiments of the present application, the sight line characteristic value is calculated from the position information of the eye keypoints by analyzing and processing the eye image, without using expensive precision instruments, so that the eye attention region is determined; this greatly reduces cost and allows the method to be widely applied.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art will be briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained by those skilled in the art from these drawings without inventive effort.
Fig. 1 is a flowchart of an embodiment of a method for detecting a region of interest in an embodiment of the present application;
FIG. 2 is a schematic diagram of various eye keypoints in an eye image;
FIG. 3 is a schematic flow chart of calculating a gaze feature value based on location information of each eye keypoint in an eye image;
FIG. 4 is a schematic view of a first feature distance;
FIG. 5 is a schematic view of a second feature distance;
FIG. 6 is a schematic flow chart of setting each eigenvalue interval;
FIG. 7 is a diagram illustrating a screen area division;
FIG. 8 is a schematic flow diagram of a calibration sample set construction process;
FIG. 9 is a schematic view of the center position of various screen regions;
fig. 10 is a schematic view showing a pattern display in the center position of each screen area in sequence;
FIG. 11 is a schematic flow chart of determining each eigenvalue interval from the average eigenvalue of each calibration sample set;
fig. 12 is a block diagram of an embodiment of a device for detecting a region of interest in an embodiment of the present application;
fig. 13 is a schematic block diagram of a terminal device in an embodiment of the present application.
Detailed Description
In order to make the objects, features and advantages of the present application more apparent and understandable, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the embodiments described below are only a part, rather than all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art based on the embodiments given herein without creative effort shall fall within the protection scope of the present application.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
In addition, in the description of the present application, the terms "first," "second," "third," and the like are used solely to distinguish one from another and are not to be construed as indicating or implying relative importance.
Referring to fig. 1, an embodiment of a method for detecting a region of interest in an embodiment of the present application may include:
and S101, acquiring an eye image to be detected.
The execution subject of the embodiments of the present application may be a terminal device with a camera and a screen, including but not limited to desktop computers, notebook computers, palmtop computers, smartphones, and smart televisions.
In a specific implementation of the embodiments of the present application, when a user watches the screen of the terminal device, the terminal device may capture an image through a camera facing the user, detect a face image in it using a face detection algorithm, and then extract an eye image from the face image. Both face detection and eye-image extraction are techniques commonly used in the prior art, so their details are not repeated in the embodiments of the present application.
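As one illustration only (the embodiments of the present application do not mandate any particular face detection algorithm), a minimal sketch using OpenCV's stock Haar cascades might look as follows; the helper name `extract_eye_image` is an assumption of this sketch:

```python
import cv2

# Stock Haar cascades shipped with opencv-python; any face/eye detector works.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def extract_eye_image(frame):
    """Detect a face in a camera frame and crop the first detected eye region."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        face_roi = gray[y:y + h, x:x + w]
        eyes = eye_cascade.detectMultiScale(face_roi)
        for (ex, ey, ew, eh) in eyes:
            return face_roi[ey:ey + eh, ex:ex + ew]
    return None  # no face/eye found in this frame
```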
Step S102, eye key point detection is carried out in the eye image, and position information of each eye key point in the eye image is obtained.
As shown in fig. 2, each eye keypoint in the eye image includes, but is not limited to: iris center point, iris upper edge, iris lower edge, iris left edge, iris right edge, upper eyelid edge, lower eyelid edge, left canthus, and right canthus.
In a specific implementation of the embodiment of the present application, a Stacked Hourglass Model (SHM) may be used to perform eye keypoint detection in the eye image, so as to obtain the position information of each eye keypoint in the eye image. The stacked hourglass model performs multi-scale transformations on images, ensures a large receptive field, and generalizes well to blurred images, so the embodiments of the present application can achieve high accuracy even with an ordinary camera.
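A minimal sketch of this step is given below; the `model` object and its `predict` method returning one heatmap per keypoint are hypothetical stand-ins for whatever stacked hourglass implementation is actually used:

```python
from typing import Dict, Tuple
import numpy as np

def detect_eye_keypoints(eye_image: np.ndarray, model) -> Dict[str, Tuple[float, float]]:
    """Return the (x, y) position of each eye keypoint in image coordinates."""
    heatmaps = model.predict(eye_image)  # hypothetical: one heatmap per keypoint
    names = ["iris_center", "left_canthus", "right_canthus"]  # subset used below
    keypoints = {}
    for name, hm in zip(names, heatmaps):
        y, x = np.unravel_index(np.argmax(hm), hm.shape)  # heatmap peak
        keypoints[name] = (float(x), float(y))
    return keypoints
```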
Of course, in other specific implementations of the embodiment of the present application, other detection models commonly used in the prior art may be used to perform the eye keypoint detection in the eye image, and the embodiment of the present application is not particularly limited thereto.
Step S103, calculating a sight characteristic value according to the position information of each eye key point in the eye image.
In a specific implementation of the embodiment of the present application, step S103 may specifically include the process shown in fig. 3:
and step S1031, calculating a first characteristic distance according to the position information of the central point of the iris and the position information of the left eye corner.
Here, the position information of the iris center point may be written as (IrisCtX, IrisCtY), where IrisCtX is the abscissa of the iris center point and IrisCtY is its ordinate; the position information of the left canthus may be written as (LfCanthusX, LfCanthusY), where LfCanthusX is the abscissa of the left canthus and LfCanthusY is its ordinate. Denoting the first feature distance as Dis1, it may be calculated according to the following equation:
Dis1=|IrisCtX-LfCanthusX|
fig. 4 is a schematic diagram of the first characteristic distance.
Step S1032 calculates a second feature distance according to the position information of the left corner of the eye and the position information of the right corner of the eye.
Here, the position information of the right canthus may be written as (RtCanthusX, RtCanthusY), where RtCanthusX is the abscissa of the right canthus and RtCanthusY is its ordinate. Denoting the second feature distance as Dis2, it may be calculated according to the following equation:
Dis2=|RtCanthusX-LfCanthusX|
fig. 5 is a schematic diagram of the second characteristic distance.
And step S1033, calculating the sight line characteristic value according to the first characteristic distance and the second characteristic distance.
Specifically, the ratio between the first characteristic distance and the second characteristic distance may be determined as the sight-line characteristic value, that is:
r=Dis1/Dis2
wherein r is the sight line characteristic value.
Through the process shown in fig. 3, the ratio of the first characteristic distance (i.e., the distance between the center point of the iris and the left corner of the eye) to the second characteristic distance (i.e., the distance between the left corner of the eye and the right corner of the eye) is used as the sight line characteristic value, which can accurately reflect the relative position of the iris on the eye, and the relative position of the iris on the eye changes correspondingly with the change of the attention area, so that the accuracy of the detection result of the attention area can be greatly improved based on the characteristic value.
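Putting the three formulas together, a minimal sketch of the feature-value computation (reusing the keypoint sketch above) could be:

```python
def gaze_feature_value(iris_center, left_canthus, right_canthus) -> float:
    """Compute r = Dis1 / Dis2 from the (x, y) positions of three keypoints."""
    iris_ct_x = iris_center[0]
    lf_canthus_x = left_canthus[0]
    rt_canthus_x = right_canthus[0]
    dis1 = abs(iris_ct_x - lf_canthus_x)     # Dis1 = |IrisCtX - LfCanthusX|
    dis2 = abs(rt_canthus_x - lf_canthus_x)  # Dis2 = |RtCanthusX - LfCanthusX|
    return dis1 / dis2                       # sight line characteristic value r
```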
It should be noted that the process shown in fig. 3 is only a specific way to calculate the sight line feature value, in the calculation process, position information of several eye key points, namely, an iris center point, a left eye corner and a right eye corner, is used, and in practical applications, position information of other eye key points may be selected according to specific situations to calculate the sight line feature value, which is not specifically limited in this embodiment of the present application.
And step S104, determining an eye attention area according to the sight line characteristic value.
Specifically, each preset feature value interval is obtained first, wherein each feature value interval corresponds to a preset screen region, then the feature value interval in which the sight line feature value is located is determined as a feature value target interval, and finally the screen region corresponding to the feature value target interval is determined as the eye attention region.
In this embodiment of the present application, the screen of the terminal device may be divided in advance into SN screen regions in the horizontal direction (SN is an integer greater than 1), sequentially recorded as: screen region 1, screen region 2, …, screen region s, …, screen region SN, where 1 ≤ s ≤ SN. The characteristic value interval corresponding to screen region 1 is [r_{1,2}, MaxVal], the characteristic value interval corresponding to screen region 2 is [r_{2,3}, r_{1,2}), …, the characteristic value interval corresponding to screen region s is [r_{s,s+1}, r_{s-1,s}), …, and the characteristic value interval corresponding to screen region SN is [MinVal, r_{SN-1,SN}), where r_{1,2} is the boundary value of screen region 1 and screen region 2, r_{2,3} is the boundary value of screen region 2 and screen region 3, …, r_{s-1,s} is the boundary value of screen region s-1 and screen region s, …, r_{SN-1,SN} is the boundary value of screen region SN-1 and screen region SN, MinVal is a preset minimum value, and MaxVal is a preset maximum value.
If r_{1,2} ≤ r ≤ MaxVal, screen region 1 may be determined as the eye attention region; if r_{2,3} ≤ r < r_{1,2}, screen region 2 may be determined as the eye attention region; …; if r_{s,s+1} ≤ r < r_{s-1,s}, screen region s may be determined as the eye attention region; …; if MinVal ≤ r < r_{SN-1,SN}, screen region SN may be determined as the eye attention region.
By presetting the characteristic value interval of each screen region, once the sight line characteristic value has been calculated, the corresponding eye attention region can be determined simply by checking which characteristic value interval the value falls in. The computation involved is tiny, which greatly improves the efficiency of attention region detection.
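The lookup itself reduces to a comparison against the boundary values, as in the following sketch (the boundary values are assumed to be sorted in descending order, matching the interval layout above):

```python
from typing import Sequence

def region_of_interest(r: float, boundaries: Sequence[float]) -> int:
    """Map a sight line characteristic value r to a 1-based screen region index.

    `boundaries` holds r_{1,2} > r_{2,3} > ... > r_{SN-1,SN} in descending order.
    """
    for s, b in enumerate(boundaries, start=1):
        if r >= b:                # r falls in [r_{s,s+1}, r_{s-1,s})
            return s
    return len(boundaries) + 1    # r < r_{SN-1,SN}: last screen region SN
```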
Preferably, before step S104, the respective feature value intervals may be set in advance through the process shown in fig. 6:
step S601, dividing the screen into SN screen regions.
In this embodiment of the present application, the screen of the terminal device may be divided into SN screen regions in the horizontal direction, and the SN screen regions are sequentially recorded as: screen region 1, screen region 2, …, screen region s, …, screen region SN.
Fig. 7 shows the screen area division when SN is 3.
And step S602, constructing each calibration sample set respectively.
Wherein the s-th calibration sample set comprises F_S characteristic value samples, each characteristic value sample is a sight line characteristic value obtained when the s-th screen area is attended to, and F_S is a positive integer.
Taking the s-th calibration sample set as an example, the specific construction process may include the steps shown in fig. 8:
and step S6021, displaying a preset pattern at the center position of the S screen area of the screen.
Fig. 9 is a schematic diagram showing the center position (position indicated by a circle) of each screen region when SN is 3. In constructing the s-th calibration sample set, the gaze of the subject may be attracted to the s-th screen region by displaying a preset pattern at a central position of the s-th screen region of the screen. The pattern may be configured as a circular pattern, a square pattern, a triangular pattern, etc. according to practical situations, and the form of the pattern is not particularly limited in the embodiments of the present application. It should be noted that, in order to achieve a better attention-attracting effect, the pattern may be in a color that is relatively different from the background color of the screen, for example, if the background color of the screen is black, the pattern may be displayed in white, red, green, purple, or the like.
And step S6022, respectively collecting the sample images of each frame.
The sample image is an eye image of the subject when the pattern is of interest. After the pattern is displayed at the center of the s-th screen area of the screen, the eye of the subject is attracted to the pattern, and at this time, a plurality of frames of eye images of the subject paying attention to the pattern, that is, the sample image, may be sequentially acquired by using a camera. For example, if a sample image is acquired every 1 second for a duration of 20 seconds, a total of 20 sample images may be acquired.
Preferably, in order to avoid errors caused by gaze shifting and fatigue-related distraction, the first few and last few acquired sample images can be discarded. For example, if 20 sample images are acquired in total, the first 3 and the last 3 can be discarded, leaving 14 sample images.
Step S6023 calculates the sight line feature value of each frame sample image.
The process of calculating the sight line feature value of each frame sample image is similar to the process in step S103, and specific reference may be made to the detailed process in step S103, which is not described herein again.
And step S6024, constructing the sight characteristic value of each frame of sample image into an S-th calibration sample set.
Taking the remaining 14 frames of sample images (i.e., the 4th to 17th frames of the originally acquired 20 frames) as an example, the sight line characteristic values of these sample images are sequentially recorded as: r_4, r_5, …, r_17.
Let the s-th calibration sample set be R_s; then: R_s = {r_4, r_5, …, r_17}.
according to the process shown in fig. 8, the values of s from 1 to SN are traversed, and then each calibration sample set can be constructed. As shown in fig. 10, taking the case that SN is 3 as an example, first, a pattern is displayed at the center position of the 1 st screen region (i.e., the left screen region), and a 1 st calibration sample set is constructed according to the acquired frame sample images, then a pattern is displayed at the center position of the 2 nd screen region (i.e., the middle screen region), and a 2 nd calibration sample set is constructed according to the acquired frame sample images, and finally a pattern is displayed at the center position of the 3 rd screen region (i.e., the right screen region), and a 3 rd calibration sample set is constructed according to the acquired frame sample images.
Through the process shown in fig. 8, for each screen region, each frame of sample image when the subject pays attention to the screen region is acquired in an actual measurement manner, the sight characteristic value of each frame of sample image is obtained through calculation, and a corresponding calibration sample set is constructed, so that a large amount of data basis is provided for subsequent detection of the attention region, and the final detection result has higher accuracy.
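A minimal sketch of this collection loop for one screen region, reusing the earlier sketches, might be as follows; `capture_frame` is a hypothetical callable that returns one eye image while the subject watches the displayed pattern:

```python
import time

def build_calibration_set(capture_frame, model, num_frames=20, trim=3):
    """Collect sight line characteristic values for one screen region."""
    samples = []
    for _ in range(num_frames):
        kp = detect_eye_keypoints(capture_frame(), model)
        samples.append(gaze_feature_value(
            kp["iris_center"], kp["left_canthus"], kp["right_canthus"]))
        time.sleep(1.0)                     # one sample image per second
    # drop the first and last few frames to avoid gaze-shift/fatigue errors
    return samples[trim:num_frames - trim]
```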
And step S603, calculating the average characteristic value of each calibration sample set respectively.
For any calibration sample set, all the sight line characteristic values included in the calibration sample set may be averaged to obtain an average characteristic value of the calibration sample set, or the calibration sample set may be downsampled, that is, a part of the sight line characteristic values included in the calibration sample set is selected to be averaged to obtain the average characteristic value of the calibration sample set.
Denoting the average characteristic value of the s-th calibration sample set as r_s, the average characteristic values of the calibration sample sets are: r_1, r_2, …, r_s, …, r_{SN}.
And step S604, determining each characteristic value interval according to the average characteristic value of each calibration sample set.
Through the process shown in fig. 6, a large number of characteristic value samples obtained through actual testing are collected in advance, calibration sample sets corresponding to the screen regions are constructed, and the actually measured data are used as a basis to determine the characteristic value intervals, so that the finally obtained characteristic value intervals are more in line with the actual situation, and the eye attention region detected on the basis of the actual situation has higher accuracy.
In a specific implementation of the embodiment of the present application, step S604 may specifically include the process shown in fig. 11:
step S6041, traversing each boundary parameter within a preset parameter interval.
Here, the boundary parameter is denoted as α, and the parameter interval may be set according to the actual situation; for example, it may be set to [0.4, 0.6]. If the interval is traversed at a step of 0.02, the first boundary parameter is α = 0.4, the second is α = 0.42, the third is α = 0.44, and so on; in total, 11 boundary parameters can be selected.
And step S6042, respectively determining a characteristic value interval division mode corresponding to each demarcation parameter according to the average characteristic value of each calibration sample set.
For any one of the boundary parameters, the corresponding characteristic value interval division is as follows:
Boundary value of screen region 1 and screen region 2: r_{1,2} = r_2 + α(r_1 - r_2);
Boundary value of screen region 2 and screen region 3: r_{2,3} = r_3 + α(r_2 - r_3);
……
Boundary value of screen region s-1 and screen region s: r_{s-1,s} = r_s + α(r_{s-1} - r_s);
……
Boundary value of screen region SN-1 and screen region SN: r_{SN-1,SN} = r_{SN} + α(r_{SN-1} - r_{SN}).
The characteristic value interval corresponding to screen region 1 is [r_{1,2}, MaxVal], the characteristic value interval corresponding to screen region 2 is [r_{2,3}, r_{1,2}), …, the characteristic value interval corresponding to screen region s is [r_{s,s+1}, r_{s-1,s}), …, and the characteristic value interval corresponding to screen region SN is [MinVal, r_{SN-1,SN}), where MinVal is a preset minimum value and MaxVal is a preset maximum value.
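As a sketch, for a given α the boundary values can be derived directly from the average characteristic values (list index 0 here corresponds to r_1):

```python
from typing import List

def boundary_values(avg: List[float], alpha: float) -> List[float]:
    """Compute [r_{1,2}, r_{2,3}, ...] via r_{s-1,s} = r_s + alpha*(r_{s-1} - r_s)."""
    return [avg[s] + alpha * (avg[s - 1] - avg[s]) for s in range(1, len(avg))]
```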
And step S6043, respectively calculating the detection error rates of various characteristic value interval division modes according to the calibration sample sets.
For any characteristic value interval division mode, all characteristic value samples contained in the SN calibration sample sets are used to verify the division. Specifically, if a characteristic value sample from the s-th calibration sample set is mapped to the s-th screen region as the eye attention region according to step S104, the sample is verified successfully; if it is mapped to a region other than the s-th screen region, verification fails. Denoting the total number of samples that fail verification as FN, the total number that succeed as TN, and the detection error rate of the division as FailRatio, then: FailRatio = FN / (FN + TN).
And S6044, selecting an optimal division mode, and determining each characteristic value interval according to the optimal division mode.
The optimal division mode is a characteristic value interval division mode with the minimum detection error rate.
Through the process shown in fig. 11, the boundary parameters are traversed in turn, the detection error rate of each characteristic value interval division is calculated from the measured data, and the division with the minimum detection error rate, i.e., the optimal division, is selected; using this division for subsequent attention region detection yields higher accuracy.
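The whole traversal can be sketched as follows, reusing `boundary_values()` and `region_of_interest()` from above; the α grid matches the [0.4, 0.6] / 0.02 example given earlier:

```python
import numpy as np

def select_best_division(calib_sets):
    """Pick the boundary values whose division minimizes FailRatio."""
    avg = [sum(cs) / len(cs) for cs in calib_sets]  # average characteristic values
    total = sum(len(cs) for cs in calib_sets)       # FN + TN
    best_ratio, best_bounds = None, None
    for alpha in np.arange(0.40, 0.61, 0.02):       # 11 candidate parameters
        bounds = boundary_values(avg, float(alpha))
        fn = sum(1 for s, cs in enumerate(calib_sets, start=1)
                 for r in cs if region_of_interest(r, bounds) != s)
        fail_ratio = fn / total                     # FailRatio = FN / (FN + TN)
        if best_ratio is None or fail_ratio < best_ratio:
            best_ratio, best_bounds = fail_ratio, bounds
    return best_bounds
```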
In summary, the embodiments of the present application obtain an eye image to be detected; perform eye keypoint detection in the eye image to obtain the position information of each eye keypoint; calculate the sight line characteristic value according to this position information; and determine the eye attention region according to the sight line characteristic value. In the embodiments of the present application, the sight line characteristic value is calculated from the position information of the eye keypoints by analyzing and processing the eye image, without using expensive precision instruments, so that the eye attention region is determined; this greatly reduces cost and allows the method to be widely applied.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 12 is a diagram illustrating an embodiment of a structure of a region of interest detection apparatus according to an embodiment of the present application, in which the structure corresponds to a region of interest detection method described in the foregoing embodiment.
In this embodiment, an attention area detecting apparatus may include:
an eye image obtaining module 1201, configured to obtain an eye image to be detected;
an eye key point detection module 1202, configured to perform eye key point detection in the eye image to obtain position information of each eye key point in the eye image;
a sight line feature value calculation module 1203, configured to calculate a sight line feature value according to position information of each eye key point in the eye image;
an eye attention region determining module 1204, configured to determine an eye attention region according to the sight line feature value.
Further, each eye keypoint in the eye image comprises: an iris center point, a left canthus and a right canthus;
the sight line feature value calculation module includes:
the first characteristic distance calculation submodule is used for calculating a first characteristic distance according to the position information of the iris center point and the position information of the left canthus;
the second characteristic distance calculation submodule is used for calculating a second characteristic distance according to the position information of the left canthus and the position information of the right canthus;
and the sight line characteristic value operator module is used for calculating the sight line characteristic value according to the first characteristic distance and the second characteristic distance.
Further, the eye region of interest determination module comprises:
the characteristic value interval acquisition submodule is used for acquiring preset characteristic value intervals, wherein each characteristic value interval corresponds to a preset screen area;
the characteristic value target interval determining submodule is used for determining a characteristic value interval in which the sight line characteristic value is positioned as a characteristic value target interval;
and the eye attention region determining submodule is used for determining the screen region corresponding to the characteristic value target interval as the eye attention region.
Further, the region of interest detecting apparatus may further include:
the screen area dividing module is used for dividing a preset screen into SN screen areas, wherein SN is an integer larger than 1;
a calibration sample set constructing module, configured to respectively construct each calibration sample set, wherein the s-th calibration sample set comprises F_S characteristic value samples, each characteristic value sample is a sight line characteristic value obtained when the s-th screen area is attended to, 1 ≤ s ≤ SN, and F_S is a positive integer;
the average characteristic value calculation module is used for calculating the average characteristic value of each calibration sample set respectively;
and the characteristic value interval determining module is used for determining each characteristic value interval according to the average characteristic value of each calibration sample set.
Further, the calibration sample set construction module may include:
the pattern display submodule is used for displaying a preset pattern at the center of the s-th screen area of the screen;
the sample image acquisition submodule is used for respectively acquiring each frame of sample image, and the sample image is an eye image when the pattern is concerned by the subject;
the sample characteristic value operator module is used for respectively calculating sight line characteristic values of all frames of sample images;
and the calibration sample set constructing submodule is used for constructing the sight characteristic value of each frame of sample image into an s-th calibration sample set.
Further, the feature value interval determination module may include:
the parameter traversing submodule is used for traversing each boundary parameter in a preset parameter interval;
the division mode determining submodule is used for respectively determining a characteristic value interval division mode corresponding to each demarcation parameter according to the average characteristic value of each calibration sample set;
the detection error rate calculation submodule is used for calculating the detection error rates of various characteristic value interval division modes according to various calibration sample sets;
and the characteristic value interval determining submodule is used for selecting an optimal division mode and determining each characteristic value interval according to the optimal division mode, wherein the optimal division mode is the characteristic value interval division mode with the minimum detection error rate.
Further, the eye keypoint detection module is specifically configured to perform eye keypoint detection in the eye image by using a Stacked Hourglass Model, so as to obtain position information of each eye keypoint in the eye image.
In the embodiments of the present application, the sight line characteristic value is calculated from the position information of the eye keypoints by analyzing and processing the eye image, without using expensive precision instruments, so that the eye attention region is determined; this greatly reduces cost and allows wide application.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses, modules, sub-modules and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Fig. 13 shows a schematic block diagram of a terminal device provided in an embodiment of the present application, and only shows a part related to the embodiment of the present application for convenience of description.
As shown in fig. 13, the terminal device 13 of this embodiment includes: a processor 130, a memory 131 and a computer program 132 stored in the memory 131 and executable on the processor 130. The processor 130 implements the steps in the above-described embodiments of the region of interest detection method, such as the steps S101 to S104 shown in fig. 1, when executing the computer program 132. Alternatively, the processor 130 implements the functions of the modules/units in the above device embodiments, for example, the functions of the modules 1201 to the module 1204 shown in fig. 12, when executing the computer program 132.
Illustratively, the computer program 132 may be partitioned into one or more modules/units that are stored in the memory 131 and executed by the processor 130 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program 132 in the terminal device 13.
The terminal device 13 may be a desktop computer, a notebook, a palm computer, a smart phone, a smart television, or other computing devices. Those skilled in the art will appreciate that fig. 13 is only an example of the terminal device 13, and does not constitute a limitation to the terminal device 13, and may include more or less components than those shown, or combine some components, or different components, for example, the terminal device 13 may further include an input-output device, a network access device, a bus, etc.
The Processor 130 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The processor 130 may be a neural center and a command center of the terminal device 13, and the processor 130 may generate an operation control signal according to the instruction operation code and the timing signal, so as to complete the control of instruction fetching and instruction execution.
The memory 131 may be an internal storage unit of the terminal device 13, such as a hard disk or a memory of the terminal device 13. The memory 131 may also be an external storage device of the terminal device 13, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, provided on the terminal device 13. Further, the memory 131 may also include both an internal storage unit and an external storage device of the terminal device 13. The memory 131 is used for storing the computer programs and other programs and data required by the terminal device 13. The memory 131 may also be used to temporarily store data that has been output or is to be output.
The terminal device 13 may further include a Communication module, and the Communication module may provide a solution for Communication applied to a network device, including Wireless Local Area Networks (WLANs) (such as Wi-Fi Networks), bluetooth, Zigbee, mobile Communication Networks, Global Navigation Satellite Systems (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (Infrared, IR), and the like. The communication module may be one or more devices integrating at least one communication processing module. The communication module may include an antenna, and the antenna may have only one array element, or may be an antenna array including a plurality of array elements. The communication module can receive electromagnetic waves through the antenna, frequency-modulate and filter electromagnetic wave signals, and send the processed signals to the processor. The communication module can also receive a signal to be sent from the processor, frequency-modulate and amplify the signal, and convert the signal into electromagnetic wave to radiate the electromagnetic wave through the antenna.
The terminal device 13 may further include a power management module, and the power management module may receive an input of an external power source, a battery and/or a charger, and supply power to the processor, the memory, the communication module, and the like.
The terminal device 13 may also include a display module operable to display information entered by the user or provided to the user. The Display module may include a Display panel, and optionally, the Display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch panel may cover the display panel, and when the touch panel detects a touch operation thereon or nearby, the touch panel transmits the touch operation to the processor to determine the type of the touch event, and then the processor provides a corresponding visual output on the display panel according to the type of the touch event.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The embodiments of the present application provide a computer program product, which when running on the terminal device, enables the terminal device to implement the steps in the above-mentioned method embodiments.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow in the method of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and can realize the steps of the embodiments of the methods described above when the computer program is executed by a processor. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer readable medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media does not include electrical carrier signals and telecommunications signals as is required by legislation and patent practice.
The above-mentioned embodiments are intended only to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (9)

1. A method for detecting a region of interest, comprising:
acquiring an eye image to be detected;
detecting eye key points in the eye image to obtain position information of each eye key point in the eye image;
calculating a sight line characteristic value according to the position information of each eye key point in the eye image;
determining an eye attention area according to the sight line characteristic value and each preset characteristic value interval;
wherein, the setting process of each characteristic value interval comprises the following steps:
traversing each demarcation parameter in a preset parameter interval;
respectively determining a characteristic value interval division mode corresponding to each demarcation parameter according to the average characteristic value of each calibration sample set; wherein each calibration sample set corresponds to a preset screen area, and the characteristic value interval corresponding to screen area s is [r_{s,s+1}, r_{s-1,s}), where r_{s-1,s} = r_s + α(r_{s-1} - r_s), r_s is the average characteristic value of the s-th calibration sample set, α is the demarcation parameter, 1 ≤ s ≤ SN, SN is the number of screen areas, and r_{s-1,s} is the boundary value between screen area s-1 and screen area s;
respectively calculating the detection error rate of various characteristic value interval division modes according to each calibration sample set;
and selecting an optimal division mode, and determining each characteristic value interval according to the optimal division mode, wherein the optimal division mode is the characteristic value interval division mode with the minimum detection error rate.
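By way of illustration, the interval-setting process above can be sketched in a few lines of Python (a minimal sketch, not part of the claim: it assumes the average characteristic values decrease monotonically from screen area 1 to screen area SN so that each interval [r_{s,s+1}, r_{s-1,s}) is well formed, pads the two outermost intervals with ±∞, and the function name and the NumPy dependency are illustrative):

```python
import numpy as np

def set_characteristic_intervals(calib_sets, alphas):
    """Grid-search the demarcation parameter alpha and keep the
    characteristic value interval division with the minimum detection
    error rate over the calibration sample sets.

    calib_sets[s] holds the sight line characteristic values sampled
    while the s-th screen area was attended to (0-indexed here, ordered
    so that the average values decrease with s)."""
    r = np.array([np.mean(samples) for samples in calib_sets])
    sn = len(r)
    best_rate, best_intervals = None, None
    for alpha in alphas:
        # boundary between area s-1 and area s: r_{s-1,s} = r_s + alpha * (r_{s-1} - r_s)
        bounds = [r[s] + alpha * (r[s - 1] - r[s]) for s in range(1, sn)]
        lowers = bounds + [-np.inf]   # lower end of [r_{s,s+1}, r_{s-1,s})
        uppers = [np.inf] + bounds    # upper end
        wrong = total = 0
        for s, samples in enumerate(calib_sets):
            for v in samples:
                wrong += not (lowers[s] <= v < uppers[s])
                total += 1
        rate = wrong / total
        if best_rate is None or rate < best_rate:
            best_rate, best_intervals = rate, list(zip(lowers, uppers))
    return best_intervals, best_rate
```

Traversing, for instance, alphas = np.linspace(0.0, 1.0, 101) realizes the "traverse each demarcation parameter in a preset parameter interval" step.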
2. The method according to claim 1, wherein each eye key point in the eye image comprises: an iris center point, a left canthus and a right canthus;
the calculating of the sight line feature value according to the position information of each eye key point in the eye image includes:
calculating a first characteristic distance according to the position information of the central point of the iris and the position information of the left canthus;
calculating a second characteristic distance according to the position information of the left canthus and the position information of the right canthus;
and calculating the sight line characteristic value according to the first characteristic distance and the second characteristic distance.
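By way of illustration, a minimal sketch of this calculation follows; the claim fixes the two characteristic distances but not how they are combined, so the ratio below (chosen because it is invariant to the apparent eye size in the image) is an assumption:

```python
import math

def sight_line_feature(iris_center, left_canthus, right_canthus):
    """Compute a sight line characteristic value from three eye key
    points given as (x, y) pixel coordinates."""
    d1 = math.dist(iris_center, left_canthus)    # first characteristic distance
    d2 = math.dist(left_canthus, right_canthus)  # second characteristic distance
    return d1 / d2  # assumed combination: a scale-invariant ratio
```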
3. The method according to claim 1, wherein the determining an eye attention area according to the sight line characteristic value and each preset characteristic value interval comprises:
acquiring preset characteristic value intervals, wherein each characteristic value interval corresponds to a preset screen area;
determining a characteristic value interval in which the sight line characteristic value is positioned as a characteristic value target interval;
and determining the screen area corresponding to the characteristic value target interval as the eye attention area.
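The corresponding look-up can be sketched as follows (a minimal sketch; the interval list is laid out as in the interval-setting sketch after claim 1, one [low, high) pair per screen area):

```python
def eye_attention_area(feature_value, intervals):
    """Return the index of the screen area whose characteristic value
    target interval [low, high) contains the sight line characteristic
    value, i.e. the eye attention area."""
    for s, (low, high) in enumerate(intervals):
        if low <= feature_value < high:
            return s
    return None  # unreachable once the outermost intervals are padded with +/- infinity
```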
4. The method according to claim 3, further comprising, before acquiring each preset characteristic value interval:
dividing a preset screen into SN screen areas;
respectively constructing each calibration sample set, wherein the s-th calibration sample set comprises F_s characteristic value samples, each characteristic value sample is a sight line characteristic value obtained when the s-th screen area is attended to, and F_s is a positive integer;
respectively calculating the average characteristic value of each calibration sample set;
and determining each characteristic value interval according to the average characteristic value of each calibration sample set.
5. The method according to claim 4, wherein the respectively constructing each calibration sample set comprises:
displaying a preset pattern at the center position of an s-th screen area of the screen;
respectively collecting sample images frame by frame, wherein each sample image is an eye image captured while the subject attends to the pattern;
respectively calculating the sight line characteristic value of each frame of sample image;
and constructing the sight line characteristic value of each frame of sample image into an s-th calibration sample set.
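A minimal sketch of building the s-th calibration sample set from the collected frames; detect_eye_keypoints is a hypothetical stand-in for whatever landmark detector supplies the iris center and the two canthi, and the ratio feature repeats the assumption made after claim 2:

```python
import math

def build_calibration_set(frames, detect_eye_keypoints):
    """Turn the eye images collected while the subject attends to the
    pattern shown in one screen area into a calibration sample set of
    sight line characteristic values."""
    samples = []
    for frame in frames:
        iris, left_canthus, right_canthus = detect_eye_keypoints(frame)
        d1 = math.dist(iris, left_canthus)           # iris center to left canthus
        d2 = math.dist(left_canthus, right_canthus)  # left canthus to right canthus
        samples.append(d1 / d2)
    return samples
```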
6. An apparatus for detecting a region of interest, comprising:
the eye image acquisition module is used for acquiring an eye image to be detected;
the eye key point detection module is used for detecting eye key points in the eye image to obtain position information of each eye key point in the eye image;
the sight line characteristic value calculating module is used for calculating a sight line characteristic value according to the position information of each eye key point in the eye image;
the eye attention area determining module is used for determining an eye attention area according to the sight line characteristic value and each preset characteristic value interval;
the characteristic value interval determining module is used for determining each characteristic value interval, and comprises:
the parameter traversing submodule is used for traversing each demarcation parameter in a preset parameter interval;
the division mode determining submodule is used for respectively determining a characteristic value interval division mode corresponding to each demarcation parameter according to the average characteristic value of each calibration sample set; wherein each calibration sample set corresponds to a preset screen area, and the characteristic value interval corresponding to screen area s is [r_{s,s+1}, r_{s-1,s}), where r_{s-1,s} = r_s + α(r_{s-1} - r_s), r_s is the average characteristic value of the s-th calibration sample set, α is the demarcation parameter, 1 ≤ s ≤ SN, SN is the number of screen areas, and r_{s-1,s} is the boundary value between screen area s-1 and screen area s;
the detection error rate calculation submodule is used for calculating the detection error rates of various characteristic value interval division modes according to various calibration sample sets;
and the characteristic value interval determining submodule is used for selecting an optimal division mode and determining each characteristic value interval according to the optimal division mode, wherein the optimal division mode is the characteristic value interval division mode with the minimum detection error rate.
7. The apparatus according to claim 6, wherein each eye key point in the eye image includes: an iris center point, a left canthus and a right canthus;
the sight line feature value calculation module includes:
the first characteristic distance calculation submodule is used for calculating a first characteristic distance according to the position information of the iris center point and the position information of the left canthus;
the second characteristic distance calculation submodule is used for calculating a second characteristic distance according to the position information of the left canthus and the position information of the right canthus;
and the sight line characteristic value calculation submodule is used for calculating the sight line characteristic value according to the first characteristic distance and the second characteristic distance.
8. A computer-readable storage medium in which a computer program is stored, wherein the computer program, when executed by a processor, carries out the steps of the method for detecting a region of interest according to any one of claims 1 to 5.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the method for detecting a region of interest according to any one of claims 1 to 5.
CN201911042436.6A 2019-10-29 2019-10-29 Method and device for detecting attention area, readable storage medium and terminal equipment Active CN110969084B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911042436.6A CN110969084B (en) 2019-10-29 2019-10-29 Method and device for detecting attention area, readable storage medium and terminal equipment
PCT/CN2020/109069 WO2021082636A1 (en) 2019-10-29 2020-08-14 Region of interest detection method and apparatus, readable storage medium and terminal device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911042436.6A CN110969084B (en) 2019-10-29 2019-10-29 Method and device for detecting attention area, readable storage medium and terminal equipment

Publications (2)

Publication Number Publication Date
CN110969084A CN110969084A (en) 2020-04-07
CN110969084B true CN110969084B (en) 2021-03-05

Family

ID=70030164

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911042436.6A Active CN110969084B (en) 2019-10-29 2019-10-29 Method and device for detecting attention area, readable storage medium and terminal equipment

Country Status (2)

Country Link
CN (1) CN110969084B (en)
WO (1) WO2021082636A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110969084B (en) * 2019-10-29 2021-03-05 深圳云天励飞技术有限公司 Method and device for detecting attention area, readable storage medium and terminal equipment
CN116682071B (en) * 2023-08-04 2023-11-10 浙江大华技术股份有限公司 Commodity interest information analysis method, equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102662470A (en) * 2012-04-01 2012-09-12 西华大学 Method and system for implementation of eye operation
CN106934365A (en) * 2017-03-09 2017-07-07 广东顺德中山大学卡内基梅隆大学国际联合研究院 A kind of reliable glaucoma patient self-detection method

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100343867C (en) * 2005-06-15 2007-10-17 北京中星微电子有限公司 Method and apparatus for distinguishing direction of visual lines
CN102662476B (en) * 2012-04-20 2015-01-21 天津大学 Gaze estimation method
CN102930278A (en) * 2012-10-16 2013-02-13 天津大学 Human eye sight estimation method and device
CN103809737A (en) * 2012-11-13 2014-05-21 华为技术有限公司 Method and device for human-computer interaction
BR112016019529B1 (en) * 2014-02-25 2023-03-07 Eyeverify Inc METHOD IMPLEMENTED BY COMPUTER AND SYSTEM FOR TRACKING THE EYE GAZE OF A USER
CN103885589B (en) * 2014-03-06 2017-01-25 华为技术有限公司 Eye movement tracking method and device
KR20180028796A (en) * 2016-09-09 2018-03-19 삼성전자주식회사 Method, storage medium and electronic device for displaying images
CN106529409B (en) * 2016-10-10 2019-08-09 中山大学 A kind of eye gaze visual angle measuring method based on head pose
CN107193383B (en) * 2017-06-13 2020-04-07 华南师范大学 Secondary sight tracking method based on face orientation constraint
JP6946831B2 (en) * 2017-08-01 2021-10-13 オムロン株式会社 Information processing device and estimation method for estimating the line-of-sight direction of a person, and learning device and learning method
CN108875524B (en) * 2018-01-02 2021-03-02 北京旷视科技有限公司 Sight estimation method, device, system and storage medium
CN108875526B (en) * 2018-01-05 2020-12-25 北京旷视科技有限公司 Method, device and system for line-of-sight detection and computer storage medium
CN108985210A (en) * 2018-07-06 2018-12-11 常州大学 A kind of Eye-controlling focus method and system based on human eye geometrical characteristic
CN109901716B (en) * 2019-03-04 2022-08-26 厦门美图之家科技有限公司 Sight point prediction model establishing method and device and sight point prediction method
CN110969084B (en) * 2019-10-29 2021-03-05 深圳云天励飞技术有限公司 Method and device for detecting attention area, readable storage medium and terminal equipment

Also Published As

Publication number Publication date
WO2021082636A1 (en) 2021-05-06
CN110969084A (en) 2020-04-07

Similar Documents

Publication Publication Date Title
CN110909611B (en) Method and device for detecting attention area, readable storage medium and terminal equipment
CN111046744B (en) Method and device for detecting attention area, readable storage medium and terminal equipment
CN109360396B (en) Remote meter reading method and system based on image recognition technology and NB-IoT technology
CN104200480B (en) A kind of image blur evaluation method and system applied to intelligent terminal
KR20180106527A (en) Electronic device and method for identifying falsification of biometric information
CN109345553B (en) Palm and key point detection method and device thereof, and terminal equipment
CN110335157A (en) Insurance products recommended method, equipment and storage medium
CN110969084B (en) Method and device for detecting attention area, readable storage medium and terminal equipment
CN113192639B (en) Training method, device, equipment and storage medium of information prediction model
WO2016168555A1 (en) Monitoring parking rule violations
CN111798521A (en) Calibration method, calibration device, storage medium and electronic equipment
CN110008815A (en) The generation method and device of recognition of face Fusion Model
CN117727102A (en) Face recognition detection method and electronic equipment
CN111161789B (en) Analysis method and device for key areas of model prediction
CN111281355A (en) Method and equipment for determining pulse acquisition position
CN112203131B (en) Prompting method and device based on display equipment and storage medium
CN110705447B (en) Hand image detection method and device and electronic equipment
KR20180106772A (en) Electronic device for perfoming pay and operation method of thereof
CN112633143A (en) Image processing system, method, head-mounted device, processing device, and storage medium
CN111325316A (en) Training data generation method and device
CN112766759B (en) Refueling management method and system for logistics enterprises
CN113227708B (en) Method and device for determining pitch angle and terminal equipment
CN114120162A (en) Model testing method, device, terminal and storage medium
CN112948691B (en) Method and device for calculating experience index of entity place
CN110765861A (en) Unlicensed vehicle type identification method and device and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant