CN111767821B - Method, device, equipment and storage medium for identifying focused object


Info

Publication number
CN111767821B
CN111767821B
Authority
CN
China
Prior art keywords
face
left eye
right eye
eye
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010582728.5A
Other languages
Chinese (zh)
Other versions
CN111767821A (en)
Inventor
王占亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jingdong Technology Holding Co Ltd
Original Assignee
Jingdong Technology Holding Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jingdong Technology Holding Co Ltd
Priority to CN202010582728.5A
Publication of CN111767821A
Application granted
Publication of CN111767821B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/19 Sensors therefor
    • G06V40/193 Preprocessing; Feature extraction
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241 Advertisements
    • G06Q30/0242 Determining effectiveness of advertisements

Abstract

The method detects the face yaw angle and face pitch angle in an image to be recognized, together with the left eye yaw angle, left eye pitch angle, right eye yaw angle and right eye pitch angle of the eyes in that image. From these angles it judges separately whether the face, the left eye and the right eye are each paying effective attention, and combines the three judgment results to decide whether the object is being focused on. Judging attention from face and eye features together overcomes the misjudgments and missed detections that arise when pedestrian attention is judged from the face angle alone, so that whether the object is really being focused on can be determined and the accuracy of recognizing a focused object is effectively improved.

Description

Method, device, equipment and storage medium for identifying focused object
Technical Field
The present disclosure relates to the field of artificial intelligence, and in particular to a method, an apparatus, a device, and a storage medium for identifying whether an object is being focused on.
Background
With the continuing spread of social informatization, information delivery devices such as electronic advertising screens, which deliver various kinds of information (advertisements, notifications and the like), can be seen everywhere in daily life, and these devices can count how much attention pedestrians pay to the delivered information. Among the many ways of calculating pedestrian attention, face recognition technology is currently the main one: a shooting device such as a camera is installed beside the advertising screen, face detection is performed on the images the camera collects, the resulting face angle information is compared with a threshold to determine the pedestrian's attention state towards the advertising screen, and the delivered information is marked as effective or not according to that state.
However, using only the comparison between face angle information and a set threshold as the basis for judging attention produces misjudgments or omissions, making pedestrian attention statistics inaccurate and in turn affecting whether the delivered information is calibrated as effective.
Disclosure of Invention
The application provides a method, a device, equipment and a storage medium for identifying whether an object is being focused on, which solve the problem of misjudgment or missed detection in pedestrian attention statistics and improve the accuracy of recognizing a focused object.
In a first aspect, an embodiment of the present application provides a method for identifying whether an object is being focused on, comprising:
acquiring an image to be identified, which is shot from the direction of the object and contains a human face;
detecting a face yaw angle and a face pitch angle in the image to be identified, and detecting a left eye yaw angle, a left eye pitch angle, a right eye yaw angle and a right eye pitch angle of the eyes in the image to be identified;
judging whether the face is paying effective attention according to the face yaw angle and the face pitch angle, and obtaining a first judgment result;
judging whether the left eye is paying effective attention according to the left eye yaw angle and the left eye pitch angle, and obtaining a second judgment result;
judging whether the right eye is paying effective attention according to the right eye yaw angle and the right eye pitch angle, and obtaining a third judgment result;
and obtaining a detection result of whether the object is focused on according to the first judgment result, the second judgment result and the third judgment result.
Optionally, determining whether the face is of effective interest according to the face yaw angle and the face pitch angle, and before obtaining the first determination result, further includes:
Detecting pupil image distance in the image to be identified;
calculating the pupil image distance, the preset pupil distance, the face yaw angle and shooting parameters to obtain the distance between the face and the object;
and determining that the distance between the face and the object is smaller than a preset distance.
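The patent does not spell out the distance formula; a minimal Python sketch under a pinhole-camera assumption (the function name, the focal-length parameter and the cos-yaw foreshortening correction are illustrative assumptions, not taken from the text) might look like:

```python
import math

def estimate_face_distance(pupil_image_dist_px: float,
                           preset_pupil_dist_mm: float,
                           face_yaw_deg: float,
                           focal_length_px: float) -> float:
    """Estimate the face-to-object distance (mm) with a pinhole-camera model.

    The interpupillary distance observed in the image shrinks both with
    distance and with head yaw (foreshortening), so the real pupil
    distance is corrected by cos(yaw) before the projection is inverted.
    """
    # Foreshortening: a yawed face projects a shorter pupil baseline.
    effective_pupil_dist_mm = preset_pupil_dist_mm * math.cos(math.radians(face_yaw_deg))
    # Pinhole projection: image_size = focal * real_size / distance.
    return focal_length_px * effective_pupil_dist_mm / pupil_image_dist_px

# A 63 mm pupil distance seen head-on at 35 px with a 600 px focal length:
d = estimate_face_distance(35.0, 63.0, 0.0, 600.0)  # 1080.0 mm
```

The result would then be compared against the preset distance (e.g. `d < preset_distance`) before the first judgment is made.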
Optionally, the obtaining a detection result of whether the object is focused according to the first determination result, the second determination result, and the third determination result includes:
if the first judgment result is that the face is effective attention, the second judgment result is that the left eye is effective attention, the third judgment result is that the right eye is effective attention, face voting scores are determined according to face confidence, left eye voting scores are determined according to left eye confidence, right eye voting scores are determined according to right eye confidence, and a detection result of whether the object is concerned or not is obtained according to the face voting scores, the left eye voting scores and the right eye voting scores;
if the first judgment result is that the face is effective attention, the second judgment result is that the left eye is effective attention, the third judgment result is that the right eye is not effective attention, face voting scores are determined according to the face confidence, left eye voting scores are determined according to the left eye confidence, and a detection result of whether the object is concerned or not is obtained according to the face voting scores and the left eye voting scores;
If the first judgment result is that the face is effective attention, the third judgment result is that the right eye is effective attention, the second judgment result is that the left eye is not effective attention, face voting score is determined according to the face confidence, right eye voting score is determined according to the right eye confidence, and a detection result of whether the object is concerned or not is obtained according to the face voting score and the right eye voting score;
if the first judgment result is that the face is not effectively focused, the second judgment result is that the left eye is effectively focused, the third judgment result is that the right eye is effectively focused, a left eye voting score is determined according to the left eye confidence, a right eye voting score is determined according to the right eye confidence, and a detection result of whether the object is focused is obtained according to the left eye voting score and the right eye voting score;
if the first judgment result is that the face is effective attention, the second judgment result is that the left eye is not effective attention, the third judgment result is that the right eye is not effective attention, face voting scores are determined according to the face confidence, and a detection result of whether the object is concerned or not is obtained according to the face voting scores;
If the first judgment result is that the face is not effectively focused, the second judgment result is that the left eye is effectively focused, the third judgment result is that the right eye is not effectively focused, a left eye voting score is determined according to the left eye confidence, and a detection result of whether the object is focused is obtained according to the left eye voting score;
if the first judgment result is that the face is not effectively focused, the second judgment result is that the left eye is not effectively focused, the third judgment result is that the right eye is effectively focused, a right eye voting score is determined according to the right eye confidence, and a detection result of whether the object is focused is obtained according to the right eye voting score;
if the first judgment result is that the face is not effectively focused, the second judgment result is that the left eye is not effectively focused, and the third judgment result is that the right eye is not effectively focused, so as to obtain a detection result that the object is not focused.
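The eight enumerated combinations collapse to "collect a voting score from every part that passed its judgment". A hedged Python sketch (treating each confidence directly as that part's voting score is an assumption; the patent leaves the exact scoring function open):

```python
def attention_votes(face_ok: bool, left_ok: bool, right_ok: bool,
                    face_conf: float, left_conf: float, right_conf: float) -> dict:
    """Collect one vote per part that was judged 'effective attention'.

    The text enumerates all eight combinations of the three judgment
    results; filtering on the judgments covers every case, with the
    empty result corresponding to 'object not focused'.
    """
    votes = {}
    if face_ok:
        votes["face"] = face_conf
    if left_ok:
        votes["left_eye"] = left_conf
    if right_ok:
        votes["right_eye"] = right_conf
    return votes  # empty dict -> the object is not focused

votes = attention_votes(True, True, False, 0.92, 0.85, 0.40)
```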
Optionally, the obtaining a detection result of whether the object is focused according to the face voting score, the left eye voting score and the right eye voting score includes:
judging whether the face voting score is larger than a first preset score, the left eye voting score is larger than the first preset score, and the right eye voting score is larger than the first preset score; if yes, obtaining a detection result that the object is focused, otherwise obtaining a detection result that the object is not focused;
obtaining a detection result of whether the object is focused according to the face voting score and the left eye voting score, wherein the detection result comprises the following steps:
judging whether the face voting score is larger than a second preset score or not, and if yes, obtaining a detection result of the object which is concerned, otherwise, obtaining a detection result of the object which is not concerned;
the obtaining a detection result of whether the object is focused according to the face voting score and the right eye voting score comprises:
judging whether the face voting score is larger than a second preset score or not, if yes, obtaining a detection result of the object being concerned, otherwise, obtaining a detection result of the object not being concerned;
the obtaining a detection result of whether the object is focused according to the left eye voting score and the right eye voting score comprises the following steps:
Judging whether the left eye voting score is larger than a second preset score or not, and if yes, obtaining a detection result of the object which is concerned, otherwise, obtaining a detection result of the object which is not concerned;
the step of obtaining a detection result of whether the object is focused according to the face voting score comprises the following steps:
judging whether the face voting score is larger than a third preset score or not, if yes, obtaining a detection result of the object concerned, otherwise, obtaining a detection result of the object not concerned;
the obtaining a detection result of whether the object is focused according to the left eye voting score comprises the following steps:
judging whether the left eye voting score is larger than a third preset score or not, if yes, obtaining a detection result of the object concerned, otherwise, obtaining a detection result of the object not concerned;
the obtaining a detection result of whether the object is focused according to the right eye voting score comprises the following steps:
judging whether the right eye voting score is larger than the third preset score, if yes, obtaining a detection result of the object concerned, otherwise, obtaining a detection result of the object not concerned;
Wherein the third preset score is not less than the second preset score, and the second preset score is not less than the first preset score.
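The tiered thresholds above can be sketched as follows; only the comparison structure is taken from the text (three parts each checked against the first preset score, two parts' primary score against the second, a single part against the third), while the function and key names are illustrative:

```python
def is_focused(votes: dict, t1: float, t2: float, t3: float) -> bool:
    """Apply the tiered preset scores (t3 >= t2 >= t1) to the vote scores.

    Fewer agreeing parts means a stricter threshold: three parts are each
    checked against t1; with two parts, the face score (or the left eye
    score when the face is absent) is checked against t2; a single part
    must clear t3.
    """
    assert t3 >= t2 >= t1, "preset scores must be non-decreasing"
    parts = set(votes)
    if parts == {"face", "left_eye", "right_eye"}:
        return all(score > t1 for score in votes.values())
    if len(parts) == 2:
        primary = "face" if "face" in parts else "left_eye"
        return votes[primary] > t2
    if len(parts) == 1:
        return next(iter(votes.values())) > t3
    return False  # no part passed its judgment -> not focused

focused = is_focused({"face": 0.9, "left_eye": 0.8, "right_eye": 0.7}, 0.5, 0.6, 0.7)
```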
Optionally, the detecting a left eye yaw angle, a left eye pitch angle, a right eye yaw angle, and a right eye pitch angle of the human eye in the image to be identified includes:
identifying an eye image in the images to be identified;
performing line-of-sight detection on the human eye image to obtain a left eye iris characteristic point, a left eye eyeball characteristic point, a right eye iris characteristic point and a right eye eyeball characteristic point;
determining the left eye yaw angle and the left eye pitch angle according to the left eye iris characteristic points and the left eye eyeball characteristic points;
and determining the right eye yaw angle and the right eye pitch angle according to the right eye iris characteristic points and the right eye eyeball characteristic points.
Optionally, the determining whether the face is of effective interest according to the face yaw angle and the face pitch angle, and obtaining the first judgment result includes:
judging whether the face yaw angle is smaller than a preset face yaw angle upper limit threshold and simultaneously larger than a preset face yaw angle lower limit threshold, and whether the face pitch angle is smaller than a preset face pitch angle upper limit threshold and larger than a preset face pitch angle lower limit threshold; if yes, obtaining the first judgment result that the face is effective attention, otherwise obtaining the first judgment result that the face is ineffective attention;
And judging whether the left eye is effectively concerned according to the left eye yaw angle and the left eye pitch angle, wherein the obtaining of a second judging result comprises the following steps:
judging whether the left eye yaw angle is smaller than a preset left eye yaw angle upper limit threshold and larger than a preset left eye yaw angle lower limit threshold at the same time, wherein the left eye pitch angle is smaller than a preset left eye pitch angle upper limit threshold and larger than a preset left eye pitch angle lower limit threshold, if yes, obtaining the second judgment result to be that the left eye is effective attention, otherwise, obtaining the second judgment result to be that the left eye is ineffective attention;
and judging whether the right eye is effectively concerned according to the right eye yaw angle and the right eye pitch angle, wherein the obtaining of a third judging result comprises the following steps:
judging whether the right eye yaw angle is smaller than a preset right eye yaw angle upper limit threshold and larger than a preset right eye yaw angle lower limit threshold, and whether the right eye pitch angle is smaller than a preset right eye pitch angle upper limit threshold and larger than a preset right eye pitch angle lower limit threshold; if yes, obtaining the third judgment result that the right eye is effective attention, otherwise obtaining the third judgment result that the right eye is ineffective attention.
Optionally, after detecting the pupil image distance in the image to be identified, and before calculating with the pupil image distance, the preset pupil distance, the face yaw angle and the shooting parameters, the method includes:
Detecting the sex of the face in the image to be identified according to the preset sex characteristics of men and women;
determining a reference pupil distance according to the sex of the face in the image to be identified;
and determining the preset pupil distance according to the reference pupil distance.
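As a sketch of this optional step, the sex-to-reference mapping might be a simple lookup; the millimetre values below are illustrative assumptions and are not specified in the patent:

```python
# Typical adult interpupillary distances differ slightly by sex; these
# reference values are illustrative, not taken from the patent.
REFERENCE_PUPIL_DISTANCE_MM = {"male": 64.0, "female": 62.0}

def preset_pupil_distance(detected_sex: str) -> float:
    """Pick the preset pupil distance from the detected face sex."""
    return REFERENCE_PUPIL_DISTANCE_MM[detected_sex]
```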
In a second aspect, an embodiment of the present application provides an identifying apparatus in which an object is focused, including:
the first acquisition module is used for acquiring an image to be identified, which is shot from the direction of the object and contains a human face;
the detection module is used for detecting the face yaw angle and face pitch angle in the image to be identified, and the left eye yaw angle, left eye pitch angle, right eye yaw angle and right eye pitch angle of the eyes in the image to be identified;
the first judging module is used for judging whether the face is of effective attention according to the face yaw angle and the face pitch angle, and obtaining a first judging result;
the second judging module is used for judging whether the left eye is effectively concerned according to the left eye yaw angle and the left eye pitch angle, and obtaining a second judging result;
the third judging module is used for judging whether the right eye is in effective focus according to the right eye yaw angle and the right eye pitch angle, and obtaining a third judging result;
And the second acquisition module is used for acquiring a detection result of whether the object is concerned or not according to the first judgment result, the second judgment result and the third judgment result.
In a third aspect, an embodiment of the present application provides an electronic device, including: the device comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory are communicated with each other through the communication bus;
the memory is used for storing a computer program;
the processor is configured to execute the program stored in the memory, and implement the method for identifying the object being focused on as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium storing a computer program which, when executed by a processor, implements a method of identifying an object of interest as described in the first aspect above.
Compared with the prior art, the technical scheme provided by the embodiments of the application has the following advantages: by detecting the face yaw angle and face pitch angle in the image to be recognized, detecting the left eye yaw angle, left eye pitch angle, right eye yaw angle and right eye pitch angle of the eyes in that image, and judging separately whether the face, the left eye and the right eye are each paying effective attention, the method effectively solves the misjudgments and missed detections that occur when pedestrian attention is judged from the face angle alone and the gaze direction of the eyes differs from the orientation of the face. Whether the object is actually being focused on can therefore be determined more accurately, and the recognition accuracy is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required to be used in the description of the embodiments or the prior art will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a flowchart of an object-focused recognition method according to a first embodiment of the present application;
fig. 2 is a schematic flow chart before obtaining a first judgment result according to a second embodiment of the present application;
fig. 3 is a schematic flow chart of detecting a left eye yaw angle, a left eye pitch angle, a right eye yaw angle and a right eye pitch angle of a human eye in an image to be identified according to a fourth embodiment of the present application;
FIG. 4 is a flowchart of a specific implementation method in which an object is focused on identification according to a fifth embodiment of the present application;
fig. 5 is a schematic structural diagram of an object-focused recognition device according to a sixth embodiment of the present application;
Fig. 6 is a schematic structural diagram of an electronic device according to a seventh embodiment of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present application based on the embodiments herein.
The embodiments of the application provide a method for identifying whether an object is being focused on, which can be integrated directly, in the form of a software module, into an electronic device. The electronic device may be the object itself or another electronic device that has established a communication connection with the object, and it may be any type of terminal or server.
In the first embodiment of the present application, referring to fig. 1, a specific process for identifying whether an object is focused mainly includes:
s101, acquiring an image to be recognized, which is shot from the direction of the object and contains a human face.
The image to be identified can be captured by a camera located at the same position as the object, shooting in any direction. Alternatively, the camera may not be co-located with the object, in which case the image to be identified is captured by shooting along the extension line connecting the camera and the object.
In this embodiment, it is assumed that the object is a delivery device provided with a camera, that is, the camera and the delivery device are designed as one unit; pedestrians are photographed from the direction of the delivery device through the camera, where the direction of the delivery device refers to any direction led out from the front of the delivery device.
S102, detecting a face yaw angle and a face pitch angle in an image to be identified, and detecting a left eye yaw angle, a left eye pitch angle, a right eye yaw angle and a right eye pitch angle of eyes in the image to be identified.
In this embodiment, the face yaw angle refers to the angle by which the face center deflects left or right relative to the vertical direction of the ground, with the deflection direction represented by sign: left deflection negative, right deflection positive. The face pitch angle refers to the angle the face center forms with the horizontal direction of the ground: an upward elevation angle is represented by a positive number and a downward depression angle by a negative number. The face center can be determined by a face detection algorithm and represented, for example, by the center of the chin or the tip of the nose.
The left eye yaw angle refers to the angle by which the actual left eye line of sight deviates left (negative) or right (positive) from the preset line of sight, where the preset line of sight is the preset direction in which the face gazes at the object.
The left eye pitch angle refers to the angle of the left eye's actual line of sight with respect to the ground level. The angle of the left eye's actual viewing direction above the ground level is the left eye elevation angle, represented by a positive number, and the angle of the left eye's actual viewing direction below the ground level is the left eye depression angle, represented by a negative number.
Similarly, the right eye yaw angle and right eye pitch angle have the same meanings as their left eye counterparts: the right eye yaw angle is the angle by which the actual right eye line of sight deviates left (negative) or right (positive) from the preset line of sight.
The right eye pitch angle refers to the angle of the actual right eye line of sight with respect to the ground level: above the horizontal it is the right eye elevation angle, represented by a positive number, and below the horizontal it is the right eye depression angle, represented by a negative number.
When face detection is performed on the image to be identified, key features such as the face, the left eye and the right eye are marked with feature points (there may be many marking points). The face center is determined from the face feature points, and the face yaw angle and face pitch angle are then determined relative to the vertical and horizontal directions of the ground. The left eye iris center and left eye eyeball center are determined from the left eye feature points, from which the left eye yaw angle and left eye pitch angle are determined; likewise, the right eye iris center and right eye eyeball center are determined from the right eye feature points, from which the right eye yaw angle and right eye pitch angle are determined.
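As one possible reading of the iris-center/eyeball-center step, the gaze angles can be approximated from the iris offset under a spherical-eyeball simplification; the function, the image coordinate convention and the known eyeball radius are all assumptions for illustration, not details from the patent:

```python
import math

def gaze_angles(iris_center, eyeball_center, eyeball_radius_px):
    """Approximate gaze yaw/pitch (degrees) from the iris offset.

    Treats the eyeball as a sphere of known image radius: the horizontal
    and vertical displacement of the iris center from the eyeball center
    maps to yaw and pitch via arcsin. Inputs are clamped so noisy
    detections never leave the arcsin domain.
    """
    dx = iris_center[0] - eyeball_center[0]
    dy = eyeball_center[1] - iris_center[1]  # image y grows downward
    yaw = math.degrees(math.asin(max(-1.0, min(1.0, dx / eyeball_radius_px))))
    pitch = math.degrees(math.asin(max(-1.0, min(1.0, dy / eyeball_radius_px))))
    return yaw, pitch

# Iris shifted 5 px right of a 10 px-radius eyeball center:
yaw, pitch = gaze_angles((105.0, 80.0), (100.0, 80.0), 10.0)
```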
And S103, judging whether the face is effectively focused according to the face yaw angle and the face pitch angle, and obtaining a first judgment result.
In this embodiment, whether the face is paying effective attention may be determined by judging the face yaw angle and the face pitch angle separately and then combining the two judgment results. If both the face yaw angle and the face pitch angle pass their judgments, the first judgment result is that the face is effective attention; otherwise, the first judgment result is that the face is ineffective attention.
In addition to the above, the first judgment result may be obtained as follows: judging whether the face yaw angle is smaller than the preset face yaw angle upper limit threshold and larger than the preset face yaw angle lower limit threshold, and whether the face pitch angle is smaller than the preset face pitch angle upper limit threshold and larger than the preset face pitch angle lower limit threshold; if so, the first judgment result is that the face is effective attention, otherwise that the face is ineffective attention.
The formula for obtaining the first determination result may be expressed as:
the method comprises the steps that FaceLeftThreshold < FaceYawAngel < FaceRightThreshold and FaceToPThreshold < FacePictginsel < FaceBottomThreshold, if a condition is met, a first judgment result is that the face is effective attention, and otherwise, the first judgment result is that the face is ineffective attention.
The FaceLeftThreshold represents a preset face yaw angle lower limit threshold, faceYawAngel represents a face yaw angle, faceRightThreshold represents a preset face yaw angle upper limit threshold, faceToPThreshold represents a preset face pitch angle lower limit threshold, facePictgingel represents a face pitch angle, and FaceBottomThreshold represents a preset face pitch angle upper limit threshold.
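Expressed as runnable Python, the check is two strict range tests; the threshold names follow the text, while packing them into a dictionary and the sample limit values are illustrative choices:

```python
def face_effective_attention(face_yaw: float, face_pitch: float, thresholds: dict) -> bool:
    """First judgment: both angles must fall strictly inside their windows."""
    return (thresholds["FaceLeftThreshold"] < face_yaw < thresholds["FaceRightThreshold"]
            and thresholds["FaceTopThreshold"] < face_pitch < thresholds["FaceBottomThreshold"])

# Sample preset limits in degrees (illustrative, not from the patent):
limits = {"FaceLeftThreshold": -30.0, "FaceRightThreshold": 30.0,
          "FaceTopThreshold": -15.0, "FaceBottomThreshold": 15.0}
```

The left eye and right eye judgments follow the same pattern with their own preset thresholds.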
S104: judging whether the left eye is in effective attention according to the left eye yaw angle and the left eye pitch angle, and obtaining a second judgment result.
In this embodiment, whether the left eye is in effective attention may be determined by separately judging whether the left eye yaw angle is within its effective range and whether the left eye pitch angle is within its effective range, and then combining the two judgments. If both the left eye yaw angle and the left eye pitch angle are judged to be effective, the second judgment result is that the left eye is in effective attention; otherwise, the second judgment result is that the left eye is in ineffective attention.
In addition to the above embodiment, the second judgment result may also be obtained as follows: judge whether the left eye yaw angle is smaller than the preset left eye yaw angle upper limit threshold and larger than the preset left eye yaw angle lower limit threshold, and whether the left eye pitch angle is smaller than the preset left eye pitch angle upper limit threshold and larger than the preset left eye pitch angle lower limit threshold; if so, the second judgment result is that the left eye is in effective attention, otherwise the second judgment result is that the left eye is in ineffective attention.
The formula for obtaining the second determination result may be expressed as:
LeftEyeLeftThreshold < LeftEyeYawAngel < LeftEyeRightThreshold and LeftEyeTopThreshold < LeftEyePitchAngel < LeftEyeBottomThreshold. If both conditions are met, the second judgment result is that the left eye is in effective attention; otherwise, the second judgment result is that the left eye is in ineffective attention.
Wherein LeftEyeLeftThreshold represents the preset left eye yaw angle lower limit threshold, LeftEyeYawAngel represents the left eye yaw angle, LeftEyeRightThreshold represents the preset left eye yaw angle upper limit threshold, LeftEyeTopThreshold represents the preset left eye pitch angle lower limit threshold, LeftEyePitchAngel represents the left eye pitch angle, and LeftEyeBottomThreshold represents the preset left eye pitch angle upper limit threshold.
S105: judging whether the right eye is in effective attention according to the right eye yaw angle and the right eye pitch angle, and obtaining a third judgment result.
In this embodiment, whether the right eye is in effective attention may be determined by separately judging whether the right eye yaw angle is within its effective range and whether the right eye pitch angle is within its effective range, and then combining the two judgments. If both the right eye yaw angle and the right eye pitch angle are judged to be effective, the third judgment result is that the right eye is in effective attention; otherwise, the third judgment result is that the right eye is in ineffective attention.
In addition to the above embodiment, the third judgment result may also be obtained as follows: judge whether the right eye yaw angle is smaller than the preset right eye yaw angle upper limit threshold and larger than the preset right eye yaw angle lower limit threshold, and whether the right eye pitch angle is smaller than the preset right eye pitch angle upper limit threshold and larger than the preset right eye pitch angle lower limit threshold; if so, the third judgment result is that the right eye is in effective attention, otherwise the third judgment result is that the right eye is in ineffective attention.
The formula for obtaining the third determination result may be expressed as:
RightEyeLeftThreshold < RightEyeYawAngel < RightEyeRightThreshold and RightEyeTopThreshold < RightEyePitchAngel < RightEyeBottomThreshold. If both conditions are met, the third judgment result is that the right eye is in effective attention; otherwise, the third judgment result is that the right eye is in ineffective attention.
Wherein RightEyeLeftThreshold represents the preset right eye yaw angle lower limit threshold, RightEyeYawAngel represents the right eye yaw angle, RightEyeRightThreshold represents the preset right eye yaw angle upper limit threshold, RightEyeTopThreshold represents the preset right eye pitch angle lower limit threshold, RightEyePitchAngel represents the right eye pitch angle, and RightEyeBottomThreshold represents the preset right eye pitch angle upper limit threshold.
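Since the left-eye and right-eye judgments use the same form of double window check, they can be sketched with one shared helper; the window tuples below are hypothetical parameters, not values fixed by this application:

```python
def angle_in_window(angle, lower, upper):
    # Strict inequality check shared by the face, left-eye and right-eye judgments.
    return lower < angle < upper

def eye_effective_attention(yaw, pitch, yaw_window, pitch_window):
    # Second/third judgment result: an eye is in effective attention iff
    # both its yaw and pitch angles fall inside their preset windows.
    # yaw_window and pitch_window are (lower_threshold, upper_threshold) pairs.
    return (angle_in_window(yaw, *yaw_window)
            and angle_in_window(pitch, *pitch_window))
```

The same helper serves S104 and S105 by passing the left-eye or right-eye threshold pairs respectively.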
S106: obtaining a detection result of whether the object is focused on according to the first judgment result, the second judgment result and the third judgment result.
In this embodiment, if the first judgment result is that the face is in effective attention, the second judgment result is that the left eye is in effective attention, and the third judgment result is that the right eye is in effective attention, a detection result that the object is focused on is obtained. If one or two of the first, second and third judgment results indicate effective attention, a detection result that the object is focused on is likewise obtained. If the first judgment result is that the face is in ineffective attention, the second judgment result is that the left eye is in ineffective attention, and the third judgment result is that the right eye is in ineffective attention, a detection result that the object is not focused on is obtained.
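The combination rule of this first embodiment reduces to a logical OR of the three judgment results, which can be sketched as:

```python
def object_attended(face_ok, left_eye_ok, right_eye_ok):
    # S106, first embodiment: the object counts as focused on when at least
    # one of the three judgment results is effective attention; only when all
    # three are ineffective is the object not focused on.
    return face_ok or left_eye_ok or right_eye_ok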
Compared with the prior-art pedestrian attention statistics method based on face recognition, the method for identifying a focused object provided by this embodiment of the invention can significantly improve recognition accuracy.
Analysis shows that the prior-art pedestrian attention statistics method compares the recognized face angle with a set threshold angle; the size of that threshold angle therefore has a great influence on the detection result, and many missed detections and misjudgments occur.
First, when a pedestrian faces the advertisement screen, or faces it at a small included angle, but the pedestrian's eyes never actually attend to the screen (for example, the pedestrian stands sideways to the screen while chatting with someone nearby), the face is still detected in the image and the pedestrian is misjudged to be in the attention state. For example, if the judgment threshold is set to 25 degrees, a detected face angle smaller than 25 degrees is judged as the attention state and a face angle larger than 25 degrees as the non-attention state, which ignores the case where the face angle is larger than 25 degrees but the eyes are focused on the screen.
Second, when a pedestrian passes the advertisement screen with the face turned sideways while the eyes attend to the screen in a side glance, the side face is not detected and the pedestrian is misjudged to be in the non-attention state. Conversely, if the judgment threshold is enlarged, for example set to 50 degrees so that any face angle smaller than 50 degrees is judged as the attention state, a pedestrian whose face angle is smaller than 50 degrees but who actually passes the screen without attending to it is misjudged as attending.
The method for identifying a focused object provided by this application combines the face angle judgment with the left eye and right eye angle judgments to obtain the detection result, which avoids the misjudgments and missed detections that occur when pedestrian attention is decided by face angle alone. Whether the object is focused on is decided based on whether the face angle, the left eye angle and the right eye angle are each effective, so whether the object is focused on can be determined more accurately, effectively improving the recognition accuracy for focused objects.
In the second embodiment of the present application, a specific implementation is proposed before step S103 of the first embodiment, as shown in Fig. 2. In this implementation, before judging whether the face is in effective attention according to the face yaw angle and the face pitch angle to obtain the first judgment result, the distance from the face to the object is identified, and the subsequent effective-attention calculation is performed only when this distance is confirmed to be small. The process of identifying this distance specifically includes:
S201: detecting the pupil image distance in the image to be identified.
In this embodiment, the pupil image distance refers to the distance between the left eye pupil and the right eye pupil in the image to be identified. Feature points of the left eye pupil and the right eye pupil are detected in the image to be identified, and the pupil image distance is determined from these feature points. The feature points may be the positions of the two pupil centers, or the distance may be measured from the outer edge of one pupil to the inner edge of the other.
S202: calculating the distance between the face and the object from the pupil image distance, the preset pupil distance, the face yaw angle and the shooting parameters.
In this embodiment, because male and female interpupillary distances differ, the reference interpupillary distance includes a male interpupillary distance and a female interpupillary distance. Therefore, after the pupil image distance in the image to be identified is detected, the preset pupil distance is determined according to the detected gender, and then the pupil image distance, the preset pupil distance, the face yaw angle and the shooting parameters are used in the calculation.
The process of determining the preset interpupillary distance mainly includes: detecting the gender of the face in the image to be identified according to preset gender features; determining the reference interpupillary distance according to that gender; and determining the preset pupil distance from the reference interpupillary distance.
The gender can be determined by judging whether the face has features such as a beard or an Adam's apple. If a beard or an Adam's apple is detected, the face is judged to be male; otherwise, it is judged to be female. The beard and Adam's apple are merely listed examples; among the preset gender features, the features used to judge gender are not unique and are not limited here.
The reference interpupillary distance includes a male interpupillary distance and a female interpupillary distance. If the gender of the face in the image to be identified is male, the male reference interpupillary distance is used; if female, the female reference interpupillary distance is used. The male interpupillary distance is between 60 and 73 millimeters, and the female interpupillary distance is between 53 and 68 millimeters.
If the reference interpupillary distance is the male one, the male interpupillary distance is determined as the preset pupil distance; if it is the female one, the female interpupillary distance is determined as the preset pupil distance.
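The gender-based selection of the preset pupil distance can be sketched as follows; using the midpoint of each quoted range as the preset value is an assumption for illustration, since the text only says the preset is taken from the male or female reference distance:

```python
# Reference interpupillary ranges quoted in the text:
# male 60-73 mm, female 53-68 mm.
MALE_PUPIL_DISTANCE_MM = (60.0 + 73.0) / 2.0    # 66.5, range midpoint (assumption)
FEMALE_PUPIL_DISTANCE_MM = (53.0 + 68.0) / 2.0  # 60.5, range midpoint (assumption)

def preset_pupil_distance(gender):
    # Select the preset interpupillary distance from the detected gender.
    return MALE_PUPIL_DISTANCE_MM if gender == "male" else FEMALE_PUPIL_DISTANCE_MM
```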
In this embodiment, the shooting parameters include a focal length of a lens of the shooting device. The distance between the face and the object can be calculated according to a formula, and the specific calculation process is as follows:
According to the optical imaging formula 1/f = 1/U + 1/V, it follows that U = fV/(V - f).
Wherein f represents the focal length of the lens of the shooting device and is a known quantity; U represents the pupil object distance, i.e. the distance from the pupil to the optical center of the shooting device, which is the quantity to be solved; V represents the pupil image distance, i.e. the distance from the eye image containing the pupil to the optical center of the shooting device, a known quantity that can be obtained during detection.
According to the imaging similarity formula S/W = V/U, it follows that V = SU/W.
Substituting V = SU/W into U = fV/(V - f) yields U = f + fW/S. Wherein W represents the preset effective pupil distance; S is the image height of the imaged inter-pupil span, S = number of pixels x pixel size, where the number of pixels is the number of pixels spanning the imaged distance between the two eyes and the pixel size is a fixed, known parameter of the camera sensor; U represents the pupil object distance and V represents the pupil image distance.
When a face yaw angle exists, the effective length W of the preset pupil distance in the direction perpendicular to the optical axis is determined from the face yaw angle according to the optical imaging similarity formula, and is expressed as:
W = p * cos(FaceYawAngel) * (1/2 + U/(2 * (U + p * sin(FaceYawAngel)))). The pupil object distance can then be calculated; the pupil object distance is the actual distance between the pupil of the human eye and the object, i.e. the distance between the face and the object.
Wherein p represents the preset pupil distance, FaceYawAngel represents the face yaw angle, and U represents the pupil object distance.
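The two relations U = f + fW/S and the yaw-compensated W are mutually dependent (W contains U). A minimal sketch of the distance estimate, resolving the pair by simple fixed-point iteration, follows; the iterative solution method, the lens focal length, pixel count and pixel size used below are all assumptions for illustration:

```python
import math

def face_to_object_distance(p_mm, yaw_deg, focal_mm, pixel_count, pixel_size_mm,
                            iterations=20):
    # Estimate the pupil object distance U (face-to-object distance) by combining
    #   U = f + f*W/S
    # with the yaw-compensated effective pupil width
    #   W = p*cos(yaw) * (1/2 + U / (2*(U + p*sin(yaw)))).
    # Because W itself depends on U, this sketch resolves the pair by
    # fixed-point iteration (an assumption; the text does not spell out a method).
    s = pixel_count * pixel_size_mm          # S: image height of the pupil span
    yaw = math.radians(yaw_deg)
    u = focal_mm + focal_mm * p_mm / s       # initial guess: zero-yaw solution
    for _ in range(iterations):
        w = p_mm * math.cos(yaw) * (0.5 + u / (2.0 * (u + p_mm * math.sin(yaw))))
        u = focal_mm + focal_mm * w / s
    return u
```

With an assumed 4 mm lens and a 66.5 mm preset pupil distance imaged over 100 pixels of 1.4 um, the zero-yaw estimate is U = f + f*p/S, roughly 1.9 m; a non-zero yaw shrinks W and therefore the estimate.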
S203: determining that the distance between the face and the object is smaller than a preset distance.
The preset distance is a preset person-to-object distance. For example, if the preset distance is 2 meters, whether the face is in effective attention is determined only when the distance between the face and the object is less than 2 meters.
If the distance between the face and the object is determined to be greater than the preset distance, it is judged whether the face yaw angle is smaller than a preset first upper limit threshold and larger than a preset first lower limit threshold; if so, a detection result that the object is focused on is obtained, otherwise a detection result that the object is not focused on is obtained.
The method for identifying a focused object provided by the second embodiment of the application effectively solves a prior-art problem: when the distance between the face and the advertisement screen is relatively short, the face direction and the eye gaze direction may be inconsistent (for example, the face is toward the screen but the eyes do not gaze at it, or the face is not toward the screen but the eyes do gaze at it), so judgment by face angle detection alone is prone to misjudgment or missed detection. When the distance between the face and the object is smaller than the preset distance, i.e. the camera is close to the pedestrian, a clearer image to be identified with a large face proportion is obtained, and whether the object is focused on is judged by detecting the face, the left eye and the right eye. With such an image, the detected face yaw angle and face pitch angle, and the detected left eye yaw angle, left eye pitch angle, right eye yaw angle and right eye pitch angle, are all more accurate, further improving the accuracy of the recognition result. Moreover, when the distance between the face and the object is smaller than the preset distance, the distance is calculated from the optical imaging formula and the gender-specific interpupillary distance obtained in face recognition, making the estimated distance more accurate.
In the third embodiment of the present application, another specific implementation is proposed for step S106 in the first embodiment, where the concept of confidence is introduced.
The confidence represents the accuracy of detection, and may be obtained by comparing the image to be identified with a preset image model. To detect the face confidence, the face image information in the image to be identified is compared with the face image information in the preset image model. The face image information may include the angle information of the face image and the quality information of the face image. The angle information of the face image in the image to be identified is obtained by a face detection algorithm, and the quality information is obtained from the image quality at the time of shooting. The face image information in the preset image model is obtained from a large number of pre-stored face detection images acquired through experiments. For example, the face confidence may be 0.9, i.e. the recognized face is accurate with probability 0.9.
To detect the left eye confidence, the left eye image information in the image to be identified is compared with the left eye image information in the preset image model. The left eye image information may include the angle information of the left eye image and the quality information of the left eye image. The angle information of the left eye image in the image to be identified is obtained by a line-of-sight detection algorithm, and the quality information is obtained from the image quality at the time of shooting. The left eye image information in the preset image model is obtained from a large number of pre-stored left eye detection images acquired through experiments. For example, the left eye confidence may be 0.9, i.e. the recognized left eye is accurate with probability 0.9.
The specific implementation of detecting the right eye confidence may refer to that of detecting the left eye confidence. The higher the image quality, the higher the confidence. Confidence detection is an existing algorithm in face detection, so its determination is not repeated here.
In this embodiment, after the first, second and third judgment results are obtained, the judgment of whether the object is focused on falls into the following cases:
In the first case, if the first judgment result is that the face is in effective attention, the second judgment result is that the left eye is in effective attention, and the third judgment result is that the right eye is in effective attention, the face voting score is determined according to the face confidence, the left eye voting score according to the left eye confidence, and the right eye voting score according to the right eye confidence, and the detection result of whether the object is focused on is obtained according to the face voting score, the left eye voting score and the right eye voting score.
At least one of the following two implementations may be adopted to obtain the detection result of whether the object is focused on:
In mode a, if the first judgment result is that the face is in effective attention, the second that the left eye is in effective attention, and the third that the right eye is in effective attention, the face confidence, the left eye confidence and the right eye confidence are obtained correspondingly, and the face confidence is taken as the face voting score, the left eye confidence as the left eye voting score, and the right eye confidence as the right eye voting score. For example, with a face confidence of 0.95, a left eye confidence of 0.9 and a right eye confidence of 0.85, the face voting score determined from the face confidence is 0.95, the left eye voting score determined from the left eye confidence is 0.9, and the right eye voting score determined from the right eye confidence is 0.85; whether the object is focused on is then determined from the face voting score 0.95, the left eye voting score 0.9 and the right eye voting score 0.85.
In mode b, if the first judgment result is that the face is in effective attention, the second that the left eye is in effective attention, and the third that the right eye is in effective attention, a set numerical value is assigned to the effective attention of the face to obtain a face count, and the product of the face confidence and the face count is calculated to obtain the face voting score, which can be expressed by the formula:
face voting score = face count x FaceConfidenceScore, where FaceConfidenceScore represents the face confidence.
For example, the face confidence is 0.95, the set value of the effective attention count is set to 1, the effective attention count of the face is 1, and the face count 1 and the face confidence are multiplied by 0.95, so that the face voting score is 0.95.
Setting a numerical value for effective attention of the left eye to obtain a left eye count, calculating the product of the left eye confidence coefficient and the left eye count to obtain a left eye voting score, wherein the left eye voting score can be expressed as follows by a formula:
left eye vote score = left eye count x LeftEyeConfidenceScore, where LeftEyeConfidenceScore represents left eye confidence.
For example, the left-eye confidence is 0.9, the set value of the effective attention count is set to 1, the effective attention count for the left eye is 1, and the left-eye count 1 and the left-eye confidence are multiplied by 0.9, so that the left-eye voting score is 0.9.
Setting a numerical value for the effective attention of the right eye to obtain a right eye count, calculating the product of the right eye confidence coefficient and the right eye count to obtain a right eye voting score, wherein the right eye voting score can be expressed as follows by a formula:
right eye vote score = right eye count x RightEyeConfidenceScore, where RightEyeConfidenceScore represents right eye confidence.
For example, with the right eye effective attention count being 1, multiplying the right eye count 1 by the right eye confidence 0.85 gives a right eye voting score of 0.85. The set value may also be 2, 3, 5 and so on, and is not limited to 1.
And then obtaining the focused detection result according to the face voting score, the left eye voting score and the right eye voting score.
Specifically, in mode a and mode b, it is judged whether the face voting score is larger than the first preset score, the left eye voting score is larger than the first preset score, and the right eye voting score is larger than the first preset score; if so, a detection result that the object is focused on is obtained, otherwise a detection result that the object is not focused on is obtained.
The first preset score is set according to the first, second and third judgment results. When the face, the left eye and the right eye are all in effective attention, the first preset score can be set to a lower value, for example 0.55. The value 0.55 is only an example; the first preset score may also be set to 0.5, 0.6 and so on, its specific value is not limited, and it can be set according to the actual situation.
For example, with a face voting score of 0.95, a left eye voting score of 0.9, a right eye voting score of 0.85 and the first preset score set to 0.55, the face voting score 0.95, the left eye voting score 0.9 and the right eye voting score 0.85 are all larger than the first preset score 0.55, so a detection result that the object is focused on is obtained.
If the face, the left eye and the right eye are all in effective attention but at least one of the face voting score, the left eye voting score and the right eye voting score is not larger than the first preset score, a detection result that the object is not focused on is obtained.
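This first case (mode a, where each confidence serves directly as its voting score) can be sketched as follows; the preset score 0.55 follows the example in the text:

```python
def case_one_detection(face_conf, left_conf, right_conf, first_preset=0.55):
    # First case, mode a: all three judgment results are effective attention,
    # each confidence is used directly as its voting score, and every voting
    # score must exceed the first preset score for the object to be focused on.
    return all(score > first_preset for score in (face_conf, left_conf, right_conf))
```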
In the second case, if the first judgment result is that the face is in effective attention, the second judgment result is that the left eye is in effective attention, and the third judgment result is that the right eye is in ineffective attention, the face voting score is determined according to the face confidence, the left eye voting score is determined according to the left eye confidence, and the detection result of whether the object is focused on is obtained according to the face voting score and the left eye voting score.
For the specific implementation of determining the face voting score from the face confidence and the left eye voting score from the left eye confidence, refer to the corresponding implementation in the first case, which is not described in detail here.
Specifically, obtaining the detection result of whether the object is focused on according to the face voting score and the left eye voting score includes: judging whether the face voting score is larger than the second preset score and the left eye voting score is larger than the second preset score; if so, a detection result that the object is focused on is obtained, otherwise a detection result that the object is not focused on is obtained.
The second preset score is slightly larger than the first preset score and may be set to 0.65. Likewise, 0.65 is only an example; the second preset score may also be set to 0.64, 0.7 and so on, as long as it is not smaller than the first preset score, its value is not limited, and it can be set as required.
For example, with the second preset score set to 0.65, a face voting score of 0.95 and a left eye voting score of 0.9, both the face voting score 0.95 and the left eye voting score 0.9 are larger than the second preset score 0.65, so a detection result that the object is focused on is obtained. If the face and the left eye are in effective attention but at least one of the face voting score and the left eye voting score is not larger than the second preset score, a detection result that the object is not focused on is obtained.
In the third case, if the first judgment result is that the face is in effective attention, the third judgment result is that the right eye is in effective attention, and the second judgment result is that the left eye is in ineffective attention, the face voting score is determined according to the face confidence, the right eye voting score is determined according to the right eye confidence, and the detection result of whether the object is focused on is obtained according to the face voting score and the right eye voting score.
The specific implementation of determining the right eye voting score according to the right eye confidence may be referred to as the implementation of the right eye voting score in the first case, which will not be described in detail herein.
Specifically, obtaining the detection result of whether the object is focused on according to the face voting score and the right eye voting score includes: judging whether the face voting score is larger than the second preset score and the right eye voting score is larger than the second preset score; if so, a detection result that the object is focused on is obtained, otherwise a detection result that the object is not focused on is obtained.
For example, when the face voting score is 0.95 and the right eye voting score is 0.85, setting the second preset score to be 0.65, and judging that the face voting score is 0.95 and is greater than the second preset score and the right eye voting score is greater than the second preset score, so as to obtain a detection result that the object is concerned. If the face is effective attention and the right eye is effective attention, at least one of the face voting score and the right eye voting score is smaller than a second preset score, and a detection result that the object is not focused is obtained.
In the fourth case, if the first judgment result is that the face is in ineffective attention, the second judgment result is that the left eye is in effective attention, and the third judgment result is that the right eye is in effective attention, the left eye voting score is determined according to the left eye confidence, the right eye voting score is determined according to the right eye confidence, and the detection result of whether the object is focused on is obtained according to the left eye voting score and the right eye voting score.
Specifically, obtaining the detection result of whether the object is focused on according to the left eye voting score and the right eye voting score includes: judging whether the left eye voting score is larger than the second preset score and the right eye voting score is larger than the second preset score; if so, a detection result that the object is focused on is obtained, otherwise a detection result that the object is not focused on is obtained.
For example, when the left eye voting score is 0.9 and the right eye voting score is 0.85, setting the second preset score to be 0.65, and judging that the condition that the left eye voting score 0.9 is greater than the second preset score by 0.65 and the right eye voting score is greater than the second preset score is satisfied, so as to obtain a detection result that the object is concerned. If the left eye is effective attention and the right eye is effective attention, at least one of the left eye voting score and the right eye voting score is smaller than a second preset score, and a detection result that the object is not focused is obtained.
In the fifth case, if the first judgment result is that the face is in effective attention, the second judgment result is that the left eye is in ineffective attention, and the third judgment result is that the right eye is in ineffective attention, the face voting score is determined according to the face confidence, and the detection result of whether the object is focused on is obtained according to the face voting score.
Specifically, according to the face voting score, obtaining a detection result of whether the object is focused on includes: judging whether the face voting score is larger than a third preset score or not, if yes, obtaining a detection result of the object concerned, otherwise, obtaining a detection result of the object not concerned.
The third preset score is greater than the second preset score, for example, the third preset score is 0.8, 0.82, etc., and the third preset score is not limited as long as the third preset score is greater than the second preset score.
For example, the face voting score is 0.95, the third preset score is set to be 0.8, and the face voting score satisfying the requirement that the face voting score 0.95 is larger than the third preset score by 0.8 is judged to be met, so that the detection result that the object is concerned is obtained. If the face is effective attention, the face voting score is smaller than the third preset score, and a detection result that the object is not focused is obtained.
In the sixth case, if the first judgment result is that the face is not effective attention, the second judgment result is that the left eye is effective attention, and the third judgment result is that the right eye is not effective attention, the left eye voting score is determined according to the left eye confidence, and a detection result of whether the object is focused on is obtained according to the left eye voting score.
Specifically, obtaining a detection result of whether the object is focused on according to the left eye voting score includes: judging whether the left eye voting score is greater than the third preset score; if yes, obtaining a detection result that the object is focused on, otherwise obtaining a detection result that the object is not focused on.
For example, when the left eye voting score is 0.9 and the third preset score is set to 0.8, the condition that the left eye voting score (0.9) is greater than the third preset score (0.8) is satisfied, so the detection result is that the object is focused on. Even if the left eye is effective attention, when the left eye voting score is not greater than the third preset score, the detection result is that the object is not focused on.
In the seventh case, if the first judgment result is that the face is not effective attention, the second judgment result is that the left eye is not effective attention, and the third judgment result is that the right eye is effective attention, the right eye voting score is determined according to the right eye confidence, and a detection result of whether the object is focused on is obtained according to the right eye voting score.
Specifically, obtaining a detection result of whether the object is focused on according to the right eye voting score includes: judging whether the right eye voting score is greater than the third preset score; if yes, obtaining a detection result that the object is focused on, otherwise obtaining a detection result that the object is not focused on.
For example, when the right eye voting score is 0.85 and the third preset score is set to 0.8, the condition that the right eye voting score (0.85) is greater than the third preset score (0.8) is satisfied, so the detection result is that the object is focused on. Even if the right eye is effective attention, when the right eye voting score is not greater than the third preset score, the detection result is that the object is not focused on.
In the eighth case, if the first judgment result is that the face is not effective attention, the second judgment result is that the left eye is not effective attention, and the third judgment result is that the right eye is not effective attention, the detection result is that the object is not focused on.
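As a non-authoritative sketch, the eight cases above collapse into a single decision rule: the fewer of the face, left eye and right eye that are effective attention, the higher the threshold each remaining voting score must exceed. The threshold values below (0.6, 0.65, 0.8) are illustrative placeholders only, not values fixed by this application.

```python
def attention_decision(face_valid, left_valid, right_valid,
                       face_score, left_score, right_score,
                       first=0.6, second=0.65, third=0.8):
    """Return True if the object is judged to be focused on.

    Cases 1-8 above: all three valid -> each score must exceed the
    first preset score; exactly two valid -> those two scores must
    exceed the second preset score; exactly one valid -> that score
    must exceed the third preset score; none valid -> not focused on.
    """
    valid = (face_valid, left_valid, right_valid)
    scores = (face_score, left_score, right_score)
    if all(valid):                      # case 1: face + both eyes valid
        return all(s > first for s in scores)
    if sum(valid) == 2:                 # cases 2-4: two of three valid
        pair = [s for v, s in zip(valid, scores) if v]
        return all(s > second for s in pair)
    if sum(valid) == 1:                 # cases 5-7: only one valid
        (single,) = [s for v, s in zip(valid, scores) if v]
        return single > third
    return False                        # case 8: nothing is valid
```

For instance, the worked example above (left eye 0.9, right eye 0.85, second preset score 0.65, face not effective attention) corresponds to `attention_decision(False, True, True, 0.0, 0.9, 0.85)` returning `True`.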
Compared with the prior art, this embodiment solves the problem that the accuracy of identifying whether the object is focused on is low because the quality of the captured eye images varies with the distance between a pedestrian and the delivery device as the pedestrian passes by. A confidence judgment related to image quality is added on top of detecting whether the left eye and the right eye are effective attention: on the premise that the face, the left eye and the right eye are effective attention, the confidence is used as a coefficient to obtain a voting score, and the final detection result of whether the object is focused on is obtained by judging the face, left eye and right eye voting scores. This avoids falsely judging the object as focused on when an unclear image yields low confidence but the object is actually not being watched, and greatly improves the recognition accuracy.
In a fourth embodiment of the present application, a specific implementation of step S102 in the first embodiment is proposed. Referring to fig. 3, in this implementation, detecting the left eye yaw angle, the left eye pitch angle, the right eye yaw angle and the right eye pitch angle of the human eye in the image to be identified includes:
S301, detecting a human eye image in the image to be identified.
Eye key point coordinates are detected in the image to be identified, an original eye image area is determined according to the eye key point coordinates, and the original eye image area is enlarged around the eye key points to obtain the eye image area.
For example, an original abscissa x, an original ordinate y, an original width and an original height of the eye image region Rectangle are calculated from the eye key point coordinates; the region is then expanded by a specified coefficient, for example by a factor of 2. The expanded abscissa X, ordinate Y, and expanded WIDTH and HEIGHT relate to the original values as follows:
X = x - width/2; Y = y - height;
WIDTH = width × 2; HEIGHT = height × 2.
Checking whether the expanded coordinates X, Y, WIDTH and HEIGHT are out of range and adjusting them appropriately includes: based on the eye image area, detecting from the eye key point coordinates whether the eyes exceed the boundary of the eye image area, and if so, adjusting the area so that the eyes fall within it, thereby obtaining the human eye image.
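The expansion and boundary check described above can be sketched as follows. The clamping strategy (shifting the box to the image border and shrinking it if necessary) is one plausible reading of "adjusting appropriately", not the only one; the generalization of the 2× formulas to an arbitrary `factor` is also an assumption.

```python
def expand_eye_region(x, y, width, height, img_w, img_h, factor=2):
    """Expand an eye bounding box by `factor`, matching the formulas
    above for factor 2 (X = x - width/2, Y = y - height,
    WIDTH = 2*width, HEIGHT = 2*height), then clamp to the image.
    """
    X = x - width * (factor - 1) / 2   # shift left by half the growth
    Y = y - height * (factor - 1)      # shift up by the full growth
    W = width * factor
    H = height * factor
    # adjust so the expanded box stays inside the image boundary
    X = max(0, X)
    Y = max(0, Y)
    W = min(W, img_w - X)
    H = min(H, img_h - Y)
    return X, Y, W, H
```

For a 40×20 box at (100, 100) inside a 640×480 image, this yields the 80×40 box at (80, 80); near the image edge the box is clamped instead.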
S302, performing line-of-sight detection on the human eye image to obtain a left eye iris characteristic point, a left eye eyeball characteristic point, a right eye iris characteristic point and a right eye eyeball characteristic point.
After detecting the human eye image in the image to be identified, before performing line-of-sight detection on the human eye image, performing image segmentation on the human eye image to obtain a left eye image and a right eye image.
Line-of-sight detection is performed on the left eye image to obtain the left eye iris feature point and the left eye eyeball feature point. The left eye iris feature point may be the left eye iris center coordinates, and the left eye eyeball feature point may be the left eye eyeball center coordinates. The left eye iris center coordinates comprise a left eye iris center abscissa and a left eye iris center ordinate, and the left eye eyeball center coordinates comprise a left eye eyeball center abscissa and a left eye eyeball center ordinate.
Line-of-sight detection is performed on the right eye image to obtain the right eye iris feature point and the right eye eyeball feature point. The right eye iris feature point may be the right eye iris center coordinates, and the right eye eyeball feature point may be the right eye eyeball center coordinates. The right eye iris center coordinates comprise a right eye iris center abscissa and a right eye iris center ordinate, and the right eye eyeball center coordinates comprise a right eye eyeball center abscissa and a right eye eyeball center ordinate.
S303, determining a left eye yaw angle and a left eye pitch angle according to the left eye iris characteristic points and the left eye eyeball characteristic points.
The left eye yaw angle may be calculated from the left eye iris feature point and the left eye eyeball feature point according to the formula:
LeftEyeYawAngle = asin((LeftEyeIrisX - LeftEyeBallX) / EyeBallRadius),
where LeftEyeYawAngle represents the left eye yaw angle, asin represents the arcsine function, LeftEyeIrisX represents the left eye iris center abscissa, LeftEyeBallX represents the left eye eyeball center abscissa, and EyeBallRadius represents the eyeball radius.
The left eye pitch angle may be calculated from the left eye iris feature point and the left eye eyeball feature point according to the formula:
LeftEyePitchAngle = asin((LeftEyeIrisY - LeftEyeBallY) / EyeBallRadius),
where LeftEyePitchAngle represents the left eye pitch angle, asin represents the arcsine function, LeftEyeIrisY represents the left eye iris center ordinate, LeftEyeBallY represents the left eye eyeball center ordinate, and EyeBallRadius represents the eyeball radius.
Performing line-of-sight detection on the left eye image to obtain the left eye yaw angle and the left eye pitch angle may use existing detection algorithms, which are not described in detail here.
S304, determining a right eye yaw angle and a right eye pitch angle according to the right eye iris characteristic points and the right eye eyeball characteristic points.
The right eye yaw angle may be calculated from the right eye iris feature point and the right eye eyeball feature point according to the formula:
RightEyeYawAngle = asin((RightEyeIrisX - RightEyeBallX) / EyeBallRadius),
where RightEyeYawAngle represents the right eye yaw angle, asin represents the arcsine function, RightEyeIrisX represents the right eye iris center abscissa, RightEyeBallX represents the right eye eyeball center abscissa, and EyeBallRadius represents the eyeball radius.
The right eye pitch angle may be calculated from the right eye iris feature point and the right eye eyeball feature point according to the formula:
RightEyePitchAngle = asin((RightEyeIrisY - RightEyeBallY) / EyeBallRadius),
where RightEyePitchAngle represents the right eye pitch angle, asin represents the arcsine function, RightEyeIrisY represents the right eye iris center ordinate, RightEyeBallY represents the right eye eyeball center ordinate, and EyeBallRadius represents the eyeball radius.
Performing line-of-sight detection on the right eye image to obtain the right eye yaw angle and the right eye pitch angle may likewise use existing detection algorithms, which are not described in detail here.
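A minimal sketch of the arcsine formulas above follows; the function name and the choice of radians for the output are illustrative assumptions, and the same function serves for either eye.

```python
import math

def gaze_angles(iris_x, iris_y, ball_x, ball_y, eyeball_radius):
    """Eye yaw from the horizontal iris-to-eyeball-center offset,
    eye pitch from the vertical offset, per the asin formulas above.
    Returns (yaw, pitch) in radians.
    """
    yaw = math.asin((iris_x - ball_x) / eyeball_radius)
    pitch = math.asin((iris_y - ball_y) / eyeball_radius)
    return yaw, pitch
```

When the iris center coincides with the eyeball center, both angles are zero, i.e. the eye looks straight ahead; a horizontal offset of half the eyeball radius gives a yaw of asin(0.5) ≈ 30°.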
In a fifth embodiment of the present application, a specific implementation of identifying whether the object is focused on is provided. Referring to fig. 4, for details of this implementation, reference may be made to the descriptions of the first to fourth embodiments above.
The specific process of identifying whether the object is focused on is as follows:
S401, acquiring an image to be identified containing a human face.
S402, detecting a face yaw angle and a face pitch angle in the image to be identified, and detecting a left eye yaw angle, a left eye pitch angle, a right eye yaw angle and a right eye pitch angle of the human eye in the image to be identified.
S403, detecting the pupil image distance in the image to be identified, and calculating the distance between the face and the object from the pupil image distance, the preset pupil distance, the face yaw angle and the shooting parameters.
S404, determining that the distance between the face and the object is smaller than a preset distance.
S405, judging whether the face is effectively concerned according to the face yaw angle and the face pitch angle, and obtaining a first judgment result;
judging whether the left eye is concerned effectively according to the yaw angle and the pitch angle of the left eye, and obtaining a second judging result;
and judging whether the right eye is effectively concerned according to the yaw angle and the pitch angle of the right eye, and obtaining a third judging result.
S406, obtaining a detection result of whether the object is focused on according to the first judgment result, the second judgment result and the third judgment result, including: determining a face voting score according to the face confidence, a left eye voting score according to the left eye confidence, and a right eye voting score according to the right eye confidence.
If the first judgment result is that the face is effective attention, the second judgment result is that the left eye is effective attention, and the third judgment result is that the right eye is effective attention, it is judged whether the face voting score, the left eye voting score and the right eye voting score are all greater than the first preset score;
or, if the first judgment result is that the face is effective attention and the second judgment result is that the left eye is effective attention, it is judged whether the face voting score and the left eye voting score are both greater than the second preset score;
or, if the first judgment result is that the face is effective attention and the third judgment result is that the right eye is effective attention, it is judged whether the face voting score and the right eye voting score are both greater than the second preset score;
or, if the second judgment result is that the left eye is effective attention and the third judgment result is that the right eye is effective attention, it is judged whether the left eye voting score and the right eye voting score are both greater than the second preset score;
or, if the first judgment result is that the face is effective attention, it is judged whether the face voting score is greater than the third preset score;
or, if the second judgment result is that the left eye is effective attention, it is judged whether the left eye voting score is greater than the third preset score;
or, if the third judgment result is that the right eye is effective attention, it is judged whether the right eye voting score is greater than the third preset score. If the corresponding condition is satisfied, the object is judged to be focused on; otherwise, it is judged not to be focused on.
S407, if it is determined that the distance between the face and the object is not smaller than the preset distance, judging whether the face yaw angle is smaller than a preset first upper limit threshold and greater than a preset first lower limit threshold; if yes, the object is judged to be focused on, otherwise it is judged not to be focused on.
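Step S403 does not spell out its formula, but a pinhole-camera sketch consistent with its inputs (the pupil image distance in pixels, a preset real-world pupil distance, the face yaw angle, and a shooting parameter such as the focal length in pixels) could look like the following. The exact combination, and the cos(yaw) foreshortening correction, are assumptions for illustration only.

```python
import math

def estimate_face_distance(pupil_px, preset_pupil_mm, face_yaw_rad, focal_px):
    """Pinhole model: distance = focal * real_size / image_size.
    The real inter-pupil distance appears foreshortened by cos(yaw)
    when the head is turned, so the projected reference size is
    preset_pupil_mm * cos(face_yaw_rad). Returns distance in mm.
    """
    projected_mm = preset_pupil_mm * math.cos(face_yaw_rad)
    return focal_px * projected_mm / pupil_px
```

S404/S407 would then compare the returned distance against the preset distance to decide whether the eye-based judgment or the face-angle-only judgment applies.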
For the technical effects achieved by this embodiment, reference may be made to the descriptions of the technical effects of the first to fourth embodiments above.
Based on the same concept, a sixth embodiment of the present application provides a device for identifying whether an object is focused on. For the specific implementation of the device, reference may be made to the description of the method embodiments and is not repeated here. The device mainly includes:
the first obtaining module 501 is configured to obtain an image to be identified, which is obtained by capturing from a direction of an object and includes a face.
The detection module 502 is configured to detect a face yaw angle and a face pitch angle in the image to be identified, and a left eye yaw angle, a left eye pitch angle, a right eye yaw angle and a right eye pitch angle of the human eye in the image to be identified.
The first judging module 503 is configured to judge whether the face is effective attention according to the face yaw angle and the face pitch angle, so as to obtain a first judging result.
And a second judging module 504, configured to judge whether the left eye is effectively focused according to the yaw angle and the pitch angle of the left eye, so as to obtain a second judging result.
And a third judging module 505, configured to judge whether the right eye is effectively focused according to the yaw angle and the pitch angle of the right eye, so as to obtain a third judging result.
The second obtaining module 506 is configured to obtain a detection result of whether the object is focused according to the first determination result, the second determination result, and the third determination result.
In this application, the detection module detects the face yaw angle and face pitch angle in the image to be identified, as well as the left eye yaw angle, left eye pitch angle, right eye yaw angle and right eye pitch angle of the human eye. The first, second and third judging modules combine these angles to judge whether the face, the left eye and the right eye, respectively, are effective attention, and the second obtaining module obtains the detection result of whether the object is focused on based on these judgments. Judging whether the object is focused on thus combines features of both the face and the eyes, which overcomes the misjudgments and omissions that occur when pedestrian attention is judged from the face angle alone, so that whether the object is really focused on can be accurately determined and the recognition accuracy is effectively improved.
Based on the same concept, a seventh embodiment of the present application further provides an electronic device. As shown in fig. 6, the electronic device mainly includes a processor 601, a communication interface 602, a memory 603 and a communication bus 604, where the processor 601, the communication interface 602 and the memory 603 communicate with each other through the communication bus 604. The memory 603 stores a program executable by the processor 601, and the processor 601 executes the program stored in the memory 603 to implement the steps of the method for identifying whether the object is focused on described in the first to sixth embodiments above.
The communication bus 604 mentioned for the above electronic device may be a peripheral component interconnect standard (Peripheral Component Interconnect, abbreviated to PCI) bus or an extended industry standard architecture (Extended Industry Standard Architecture, abbreviated to EISA) bus, or the like. The communication bus 604 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in fig. 6, but this does not mean that there is only one bus or one type of bus.
The communication interface 602 is used for communication between the electronic device and other devices described above.
The memory 603 may include random access memory (Random Access Memory, simply RAM) or may include non-volatile memory (non-volatile memory), such as at least one magnetic disk memory. Alternatively, the memory may be at least one memory device located remotely from the aforementioned processor 601.
The processor 601 may be a general-purpose processor including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), a digital signal processor (Digital Signal Processing, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a Field programmable gate array (Field-Programmable Gate Array, FPGA), or other programmable logic device, discrete gate or transistor logic device, or discrete hardware components.
In a ninth embodiment of the present application, there is also provided a computer-readable storage medium having stored therein a computer program which, when run on a computer, causes the computer to perform the steps of the identification method of an object of interest described in the above embodiments.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, by a wired (e.g., coaxial cable, optical fiber, digital Subscriber Line (DSL)), or wireless (e.g., infrared, microwave, etc.) means from one website, computer, server, or data center to another. The computer readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that contains an integration of one or more available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape, etc.), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk), etc.
It should be noted that in this document, relational terms such as "first" and "second" and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing is merely a specific embodiment of the application to enable one skilled in the art to understand or practice the application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (9)

1. A method of identifying an object of interest, comprising:
acquiring an image to be recognized, which is shot from the direction of an object and contains a human face;
detecting a face yaw angle and a face pitch angle in the image to be identified, and detecting a left eye yaw angle, a left eye pitch angle, a right eye yaw angle and a right eye pitch angle of eyes in the image to be identified;
judging whether the face is of effective interest or not according to the face yaw angle and the face pitch angle, and obtaining a first judgment result, wherein the first judgment result comprises the following steps: judging whether the face yaw angle is smaller than a preset face yaw angle upper limit threshold and larger than a preset face yaw angle lower limit threshold at the same time, wherein the face pitch angle is smaller than a preset face pitch angle upper limit threshold and larger than a preset face pitch angle lower limit threshold, if yes, obtaining the first judgment result to be that the face is effective attention, otherwise, obtaining the first judgment result to be that the face is ineffective attention;
judging whether the left eye is effectively concerned according to the left eye yaw angle and the left eye pitch angle, and obtaining a second judging result, wherein the second judging result comprises the following steps: judging whether the left eye yaw angle is smaller than a preset left eye yaw angle upper limit threshold and larger than a preset left eye yaw angle lower limit threshold at the same time, wherein the left eye pitch angle is smaller than a preset left eye pitch angle upper limit threshold and larger than a preset left eye pitch angle lower limit threshold, if yes, obtaining the second judgment result to be that the left eye is effective attention, otherwise, obtaining the second judgment result to be that the left eye is ineffective attention;
Judging whether the right eye is effectively concerned according to the right eye yaw angle and the right eye pitch angle, and obtaining a third judging result, wherein the third judging result comprises the following steps: judging whether the right eye yaw angle is smaller than a preset right eye yaw angle upper limit threshold and larger than a preset right eye yaw angle lower limit threshold at the same time, wherein the right eye pitch angle is smaller than a preset right eye pitch angle upper limit threshold and larger than a preset right eye pitch angle lower limit threshold, if yes, obtaining a third judging result to be that the right eye is effective attention, otherwise, obtaining a third judging result to be that the right eye is ineffective attention;
and obtaining a detection result of whether the object is focused or not according to the first judgment result, the second judgment result and the third judgment result.
2. The method according to claim 1, wherein the step of determining whether the face is of effective interest based on the face yaw angle and the face pitch angle, and before obtaining the first determination result, further comprises:
detecting pupil image distance in the image to be identified;
calculating the pupil image distance, the preset pupil distance, the face yaw angle and shooting parameters to obtain the distance between the face and the object;
And determining that the distance between the face and the object is smaller than a preset distance.
3. The method according to claim 1, wherein the obtaining a detection result of whether the object is focused on based on the first determination result, the second determination result, and the third determination result includes:
if the first judgment result is that the face is effective attention, the second judgment result is that the left eye is effective attention, the third judgment result is that the right eye is effective attention, face voting scores are determined according to face confidence, left eye voting scores are determined according to left eye confidence, right eye voting scores are determined according to right eye confidence, and a detection result of whether the object is concerned or not is obtained according to the face voting scores, the left eye voting scores and the right eye voting scores;
if the first judgment result is that the face is effective attention, the second judgment result is that the left eye is effective attention, the third judgment result is that the right eye is not effective attention, face voting scores are determined according to the face confidence, left eye voting scores are determined according to the left eye confidence, and a detection result of whether the object is concerned or not is obtained according to the face voting scores and the left eye voting scores;
If the first judgment result is that the face is effective attention, the third judgment result is that the right eye is effective attention, the second judgment result is that the left eye is not effective attention, face voting score is determined according to the face confidence, right eye voting score is determined according to the right eye confidence, and a detection result of whether the object is concerned or not is obtained according to the face voting score and the right eye voting score;
if the first judgment result is that the face is not effectively focused, the second judgment result is that the left eye is effectively focused, the third judgment result is that the right eye is effectively focused, a left eye voting score is determined according to the left eye confidence, a right eye voting score is determined according to the right eye confidence, and a detection result of whether the object is focused is obtained according to the left eye voting score and the right eye voting score;
if the first judgment result is that the face is effective attention, the second judgment result is that the left eye is not effective attention, the third judgment result is that the right eye is not effective attention, face voting scores are determined according to the face confidence, and a detection result of whether the object is concerned or not is obtained according to the face voting scores;
If the first judgment result is that the face is not effectively focused, the second judgment result is that the left eye is effectively focused, the third judgment result is that the right eye is not effectively focused, a left eye voting score is determined according to the left eye confidence, and a detection result of whether the object is focused is obtained according to the left eye voting score;
if the first judgment result is that the face is not effectively focused, the second judgment result is that the left eye is not effectively focused, the third judgment result is that the right eye is effectively focused, a right eye voting score is determined according to the right eye confidence, and a detection result of whether the object is focused is obtained according to the right eye voting score;
if the first judgment result is that the face is not effectively focused, the second judgment result is that the left eye is not effectively focused, and the third judgment result is that the right eye is not effectively focused, so as to obtain a detection result that the object is not focused.
4. The method of identifying an object of interest according to claim 3, wherein the obtaining a detection result of whether the object is focused on based on the face voting score, the left eye voting score and the right eye voting score comprises:
Judging whether the face voting score is greater than a first preset score, the left eye voting score is greater than the first preset score, and the right eye voting score is greater than the first preset score at the same time; if yes, obtaining a detection result that the object is focused on, otherwise obtaining a detection result that the object is not focused on;
the obtaining a detection result of whether the object is focused on according to the face voting score and the left eye voting score comprises:
judging whether the face voting score is greater than a second preset score; if yes, obtaining a detection result that the object is focused on, otherwise obtaining a detection result that the object is not focused on;
the obtaining a detection result of whether the object is focused on according to the face voting score and the right eye voting score comprises:
judging whether the face voting score is greater than the second preset score; if yes, obtaining a detection result that the object is focused on, otherwise obtaining a detection result that the object is not focused on;
the obtaining a detection result of whether the object is focused on according to the left eye voting score and the right eye voting score comprises:
judging whether the left eye voting score is greater than the second preset score; if yes, obtaining a detection result that the object is focused on, otherwise obtaining a detection result that the object is not focused on;
the obtaining a detection result of whether the object is focused on according to the face voting score comprises:
judging whether the face voting score is greater than a third preset score; if yes, obtaining a detection result that the object is focused on, otherwise obtaining a detection result that the object is not focused on;
the obtaining a detection result of whether the object is focused on according to the left eye voting score comprises:
judging whether the left eye voting score is greater than the third preset score; if yes, obtaining a detection result that the object is focused on, otherwise obtaining a detection result that the object is not focused on;
the obtaining a detection result of whether the object is focused on according to the right eye voting score comprises:
judging whether the right eye voting score is greater than the third preset score; if yes, obtaining a detection result that the object is focused on, otherwise obtaining a detection result that the object is not focused on;
wherein the third preset score is not less than the second preset score, and the second preset score is not less than the first preset score.
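The tiered decision rule described in the claim steps above (three available scores are each compared against the first preset score, two against the second, a single score against the stricter third) can be sketched as follows. The function name and the concrete threshold values are illustrative assumptions, not values from the patent:

```python
def detect_focus(face_score=None, left_score=None, right_score=None,
                 s1=1.0, s2=2.0, s3=3.0):
    """Return True if the object is judged to be focused on.

    Hypothetical thresholds s1 <= s2 <= s3 stand in for the first,
    second and third preset scores; None marks a voting score that was
    not obtained (i.e. that part was not effectively focused).
    """
    scores = [face_score, left_score, right_score]
    present = [s for s in scores if s is not None]
    if len(present) == 3:
        # All three scores available: each must exceed the first preset score.
        return all(s > s1 for s in present)
    if len(present) == 2:
        # Two scores available: the claim compares the face voting score
        # (or, for the left/right eye pair, the left eye voting score)
        # with the second preset score; in list order [face, left, right]
        # that is simply the first score that is present.
        return present[0] > s2
    if len(present) == 1:
        # A single score must exceed the stricter third preset score.
        return present[0] > s3
    return False
```

With the assumed thresholds, `detect_focus(2, 2, 2)` succeeds because every score exceeds `s1`, while `detect_focus(right_score=2.5)` fails because a lone score must clear `s3`.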
5. The method of identifying an object of interest according to any one of claims 1 to 4, wherein the detecting the left eye yaw angle, the left eye pitch angle, the right eye yaw angle and the right eye pitch angle of the human eye in the image to be identified comprises:
identifying a human eye image in the image to be identified;
performing line-of-sight detection on the human eye image to obtain left eye iris characteristic points, left eye eyeball characteristic points, right eye iris characteristic points and right eye eyeball characteristic points;
determining the left eye yaw angle and the left eye pitch angle according to the left eye iris characteristic points and the left eye eyeball characteristic points;
and determining the right eye yaw angle and the right eye pitch angle according to the right eye iris characteristic points and the right eye eyeball characteristic points.
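One common way to derive yaw and pitch angles from iris and eyeball characteristic points is to map the normalized offset of the iris center from the eyeball center onto an assumed rotation range. This is a geometric sketch of that idea, not the patent's exact computation; the function name, normalization and maximum angles are assumptions:

```python
def gaze_angles(iris_center, eye_center, eye_width, eye_height,
                max_yaw_deg=45.0, max_pitch_deg=30.0):
    """Estimate gaze yaw and pitch (degrees) for one eye.

    iris_center / eye_center are (x, y) pixel coordinates taken from the
    detected characteristic points; eye_width / eye_height describe the
    eyeball region's extent in pixels.
    """
    # Offset of the iris from the eyeball center, normalized to [-1, 1].
    dx = (iris_center[0] - eye_center[0]) / (eye_width / 2.0)
    dy = (iris_center[1] - eye_center[1]) / (eye_height / 2.0)
    # Clamp so extreme or noisy detections stay bounded.
    dx = max(-1.0, min(1.0, dx))
    dy = max(-1.0, min(1.0, dy))
    # Linear mapping of the normalized offset onto the assumed angle range.
    return dx * max_yaw_deg, dy * max_pitch_deg
```

For example, an iris center 10 px right of a 40 px wide eyeball's center gives a normalized offset of 0.5, i.e. half the assumed maximum yaw.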
6. The method according to claim 2, wherein after the pupil distance in the image to be identified is detected and before the calculation using the pupil distance, the preset pupil distance, the face yaw angle and the photographing parameters, the method further comprises:
detecting the gender of the face in the image to be identified according to preset male and female gender characteristics;
determining a reference pupil distance according to the gender of the face in the image to be identified;
and determining the preset pupil distance according to the reference pupil distance.
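The gender-to-reference-distance lookup in claim 6 can be sketched as a simple table. The millimetre values below are typical adult interpupillary-distance averages used as placeholder assumptions; the patent only states that a reference value is chosen per detected gender:

```python
# Hypothetical reference interpupillary distances in millimetres.
REFERENCE_PD_MM = {"male": 64.0, "female": 62.0}

def preset_pupil_distance(gender, default=63.0):
    """Pick the preset pupil distance from the detected face gender,
    falling back to an assumed population average when detection fails."""
    return REFERENCE_PD_MM.get(gender, default)
```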
7. An apparatus for identifying an object of interest, comprising:
the first acquisition module is used for acquiring an image to be identified, which is shot from the direction of the object and contains a human face;
the detection module is used for detecting the face yaw angle and the face pitch angle of the human face in the image to be identified, and the left eye yaw angle, the left eye pitch angle, the right eye yaw angle and the right eye pitch angle of the human eye in the image to be identified;
the first judging module is configured to judge whether the face is effectively focused according to the face yaw angle and the face pitch angle and obtain a first judgment result, which comprises: judging whether the face yaw angle is smaller than a preset face yaw angle upper limit threshold and larger than a preset face yaw angle lower limit threshold while the face pitch angle is smaller than a preset face pitch angle upper limit threshold and larger than a preset face pitch angle lower limit threshold; if yes, the first judgment result is that the face is effectively focused, otherwise the first judgment result is that the face is not effectively focused;
the second judging module is configured to judge whether the left eye is effectively focused according to the left eye yaw angle and the left eye pitch angle and obtain a second judgment result, which comprises: judging whether the left eye yaw angle is smaller than a preset left eye yaw angle upper limit threshold and larger than a preset left eye yaw angle lower limit threshold while the left eye pitch angle is smaller than a preset left eye pitch angle upper limit threshold and larger than a preset left eye pitch angle lower limit threshold; if yes, the second judgment result is that the left eye is effectively focused, otherwise the second judgment result is that the left eye is not effectively focused;
the third judging module is configured to judge whether the right eye is effectively focused according to the right eye yaw angle and the right eye pitch angle and obtain a third judgment result, which comprises: judging whether the right eye yaw angle is smaller than a preset right eye yaw angle upper limit threshold and larger than a preset right eye yaw angle lower limit threshold while the right eye pitch angle is smaller than a preset right eye pitch angle upper limit threshold and larger than a preset right eye pitch angle lower limit threshold; if yes, the third judgment result is that the right eye is effectively focused, otherwise the third judgment result is that the right eye is not effectively focused;
and the second acquisition module is used for obtaining a detection result of whether the object is focused on according to the first judgment result, the second judgment result and the third judgment result.
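All three judging modules apply the same pattern: a part counts as effectively focused only when both of its angles lie strictly inside a preset window. A minimal sketch of that shared check follows; the window values used in the example are placeholders, not thresholds from the patent:

```python
def within(angle, lower, upper):
    """True if the angle lies strictly between the preset lower and
    upper limit thresholds."""
    return lower < angle < upper

def effective_attention(yaw, pitch, yaw_limits, pitch_limits):
    """Judge one part (face, left eye or right eye) as effectively
    focused when both its yaw and pitch fall inside their windows.

    yaw_limits / pitch_limits are (lower, upper) threshold pairs."""
    return within(yaw, *yaw_limits) and within(pitch, *pitch_limits)
```

For instance, with assumed windows of ±30° yaw and ±20° pitch, a face at (5°, -3°) passes while one at (40°, 0°) fails on yaw alone.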
8. An electronic device, comprising: the device comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory are communicated with each other through the communication bus;
the memory is used for storing a computer program;
the processor is configured to execute the program stored in the memory to implement the method of identifying an object of interest according to any one of claims 1-6.
9. A computer readable storage medium storing a computer program, which when executed by a processor implements a method of identifying an object of interest according to any one of claims 1-6.
CN202010582728.5A 2020-06-23 2020-06-23 Method, device, equipment and storage medium for identifying focused object Active CN111767821B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010582728.5A CN111767821B (en) 2020-06-23 2020-06-23 Method, device, equipment and storage medium for identifying focused object


Publications (2)

Publication Number Publication Date
CN111767821A CN111767821A (en) 2020-10-13
CN111767821B true CN111767821B (en) 2024-04-09

Family

ID=72722810

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010582728.5A Active CN111767821B (en) 2020-06-23 2020-06-23 Method, device, equipment and storage medium for identifying focused object

Country Status (1)

Country Link
CN (1) CN111767821B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106598221A (en) * 2016-11-17 2017-04-26 电子科技大学 Eye key point detection-based 3D sight line direction estimation method
US9940518B1 (en) * 2017-09-11 2018-04-10 Tobii Ab Reliability of gaze tracking data for left and right eye
CN110046546A (en) * 2019-03-05 2019-07-23 成都旷视金智科技有限公司 A kind of adaptive line of sight method for tracing, device, system and storage medium
JP2020048971A (en) * 2018-09-27 2020-04-02 アイシン精機株式会社 Eyeball information estimation device, eyeball information estimation method, and eyeball information estimation program
CN111046744A (en) * 2019-11-21 2020-04-21 深圳云天励飞技术有限公司 Method and device for detecting attention area, readable storage medium and terminal equipment
CN111291737A (en) * 2020-05-09 2020-06-16 支付宝(杭州)信息技术有限公司 Face image acquisition method and device and electronic equipment

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10254831B2 (en) * 2014-04-08 2019-04-09 Umoove Services Ltd. System and method for detecting a gaze of a viewer
JP2016173313A (en) * 2015-03-17 2016-09-29 国立大学法人鳥取大学 Visual line direction estimation system, visual line direction estimation method and visual line direction estimation program
JP7146585B2 (en) * 2018-11-13 2022-10-04 本田技研工業株式会社 Line-of-sight detection device, program, and line-of-sight detection method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Real-time head pose estimation based on RGBD; Chen Guojun; Yang Jing; Cheng Yan; Yin Peng; Journal of Graphics; Vol. 04 (No. 40); full text *
Image detection and graded early warning of driver distraction; Cheng Wendong; Fu Rui; Yuan Wei; Liu Zhuofan; Zhang Mingfang; Liu Tong; Journal of Computer-Aided Design & Computer Graphics (08); full text *

Also Published As

Publication number Publication date
CN111767821A (en) 2020-10-13

Similar Documents

Publication Publication Date Title
CN110738142B (en) Method, system and storage medium for adaptively improving face image acquisition
CN109389135B (en) Image screening method and device
US20220148331A1 (en) Decreasing lighting-induced false facial recognition
WO2020037932A1 (en) Image quality assessment method, apparatus, electronic device and computer readable storage medium
CN111767820A (en) Method, device, equipment and storage medium for identifying object concerned
US9762788B2 (en) Image pickup apparatus, depth information acquisition method and program
WO2020181872A1 (en) Object detection method and apparatus, and electronic device
CN107564020B (en) Image area determination method and device
CN112489140A (en) Attitude measurement method
US10642353B2 (en) Non-transitory computer-readable storage medium, information processing apparatus, and information processing method
CN112597785A (en) Method and system for guiding image acquisition of target object
CN105180802B (en) A kind of dimension of object information identifying method and device
CN109753886B (en) Face image evaluation method, device and equipment
CN110211021B (en) Image processing apparatus, image processing method, and storage medium
JP2014061085A (en) Method fo detecting ellipse approximating to pupil portion
CN111767821B (en) Method, device, equipment and storage medium for identifying focused object
CN111738241B (en) Pupil detection method and device based on double cameras
CN111814659B (en) Living body detection method and system
CN111753796A (en) Method and device for identifying key points in image, electronic equipment and storage medium
JP2009059165A (en) Outline detection apparatus, sight line detection apparatus using the same, program for causing computer to remove false outline data, program for causing computer to detect sight line direction, and computer-readable recording medium with the program recorded
JP7040627B2 (en) Calculator, information processing method and program
CN113436120A (en) Image fuzzy value identification method and device
CN112598610A (en) Depth image obtaining method and device, electronic equipment and storage medium
CN112232121A (en) Living body detection method, apparatus, device, and medium
WO2022183536A1 (en) Intelligent toilet having health detection function and health detection method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 221, 2 / F, block C, 18 Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant after: Jingdong Technology Holding Co.,Ltd.

Address before: Room 221, 2 / F, block C, 18 Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant before: Jingdong Digital Technology Holding Co.,Ltd.

Address after: Room 221, 2 / F, block C, 18 Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant after: Jingdong Digital Technology Holding Co.,Ltd.

Address before: Room 221, 2 / F, block C, 18 Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant before: JINGDONG DIGITAL TECHNOLOGY HOLDINGS Co.,Ltd.

GR01 Patent grant