CN113971220A - Method for identifying attention based on human eyes and multimedia terminal - Google Patents

Method for identifying attention based on human eyes and multimedia terminal Download PDF

Info

Publication number
CN113971220A
CN113971220A (application number CN202111248038.7A)
Authority
CN
China
Prior art keywords
face
camera
data
coordinate
multimedia terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111248038.7A
Other languages
Chinese (zh)
Inventor
徐子乔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Xinchao Media Group Co Ltd
Original Assignee
Chengdu Baixin Zhilian Technology Co ltd
Chengdu Xinchao Media Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Baixin Zhilian Technology Co ltd, Chengdu Xinchao Media Group Co Ltd filed Critical Chengdu Baixin Zhilian Technology Co ltd
Priority to CN202111248038.7A priority Critical patent/CN113971220A/en
Publication of CN113971220A publication Critical patent/CN113971220A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43Querying
    • G06F16/435Filtering based on additional data, e.g. user or group profiles
    • G06F16/436Filtering based on additional data, e.g. user or group profiles using biological or physiological data of a human being, e.g. blood pressure, facial expression, gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/44Browsing; Visualisation therefor

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Molecular Biology (AREA)
  • Physiology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method for identifying attention based on human eyes, and a multimedia terminal. The method comprises: creating a camera callback interface, implementing a helper class for the camera, and exposing preview data and related exceptions raised while the camera runs; creating a camera instance in the multimedia terminal, calling the camera callback interface, and monitoring the video preview stream returned by the camera; drawing a face frame, acquiring eyeball data parameters through a face acquisition SDK, and monitoring the face rotation range, the interpupillary distance, and the corrected interpupillary distance; and judging the degree of attention in the multimedia terminal, recording data when a user watches the screen. The invention captures the sight range and angle of human eyes to judge whether a user is watching the multimedia product terminal. By acquiring users' viewing preferences, a large database is built, enabling accurately targeted advertisement or video delivery. By monitoring the real-time camera preview stream to capture the face frame, the method effectively improves face-recognition efficiency and does not require the user to stay in front of the camera for a long time.

Description

Method for identifying attention based on human eyes and multimedia terminal
Technical Field
The invention relates to the technical field of face recognition, in particular to a method for recognizing attention based on human eyes and a multimedia terminal.
Background
With the advent of the AI and big-data era, face recognition has become an area that technical development must conquer and implement. It mainly refers to: broadly scanning the area captured by a camera built into, or attached to, a device; finding face data in that area that matches a database; and recording and examining that face data.
Face recognition technology has developed to the point where both convenience stores and superstores are commonly equipped with terminal devices carrying face recognition. Some are used for face-scanning payment, others to strengthen security measures. These devices, supported by increasingly mature face recognition technology, have changed people's living habits. Multimedia-terminal face-recognition attention devices likewise integrate face recognition technology, improved and developed specifically on that basis.
Most face recognition technologies currently on the market are built around face acquisition, liveness detection, face comparison, and a face library: facial key points and face images are captured and then compared with the face library to achieve recognition. Face recognition still faces many difficulties and bottlenecks, such as ambient brightness during recognition, face position, and the accuracy and speed of face capture. Face recognition devices require the face to remain still in front of the camera for a certain time before it can be fully recognized, so recognition efficiency is not high.
Disclosure of Invention
The invention provides a method for identifying attention based on human eyes and a multimedia terminal, which aim to solve the technical problem.
The technical scheme adopted by the invention is as follows: the method for identifying the attention based on the human eyes comprises the following steps:
creating a camera callback interface, implementing helper classes for the camera, and exposing preview data and related exceptions raised while the camera runs;
the method comprises the steps of creating a camera instance in a multimedia terminal, calling a camera callback interface, and monitoring a video preview stream returned by a camera;
drawing a face frame, acquiring eyeball data parameters through a face acquisition SDK, and monitoring the rotation range, the interpupillary distance and the corrected interpupillary distance data of the face;
and judging the attention degree in the multimedia terminal, and recording data when a user watches a screen.
As a preferred mode of the method for identifying attention based on human eyes, the method for drawing a human face frame, acquiring eyeball data parameters through a human face acquisition SDK, and monitoring the rotation range, the interpupillary distance and the corrected interpupillary distance data of the human face comprises the following steps:
setting a face detection parameter when the attention equipment is initialized;
the camera stream acquires a face and then compares the face with the database, and after the comparison is finished, the face information is scanned and uploaded;
after various parameters of the eyeballs are obtained, angle filtering is carried out, and finally face data recognized by the camera are returned;
after the face data are obtained, the eyeball interpupillary distance is calculated first: the face interpupillary distance is computed from the x and y coordinates of the left-eye and right-eye center points with the interpupillary distance formula sqrt((x2-x1).pow(2) + (y2-y1).pow(2)); where x1 is the x coordinate of the left-eye midpoint, x2 the x coordinate of the right-eye midpoint, y1 the y coordinate of the left-eye midpoint, and y2 the y coordinate of the right-eye midpoint; pow(2) denotes the mathematical square and sqrt() the square root;
then the corrected eyeball interpupillary distance is calculated from the x and y coordinates of the left-eye and right-eye midpoints and the face offset, using the correction formula ipd / cos(abs(yaw) * Math.PI / 180); where ipd is the human-eye interpupillary distance computed by the interpupillary distance formula, yaw is the face rotation offset, abs(yaw) is its absolute value, and Math.PI is the mathematical constant pi;
after the interpupillary distance is obtained, it is matched against the database: all close interpupillary-distance entries are found, the best-matching entry is selected as the nearest interpupillary distance, and finally it is judged whether the target data falls within the range of the matched data.
As a preferable mode of the method for recognizing the attention based on the human eye, the performing of the filtering of the angle includes:
judging whether the left-right deflection angle of the face is within the initialization parameters, whether the in-plane head rotation angle exceeds its limit, and whether the up-down deflection angle of the face exceeds its limit; faces exceeding the limits are filtered out.
As a preferable mode of the method for recognizing the attention based on the human eye, the face detection parameters include: minimum face number, blur, occlusion, and illumination quality detection standard.
The invention also discloses a multimedia terminal which judges whether the user is watching the multimedia product terminal by the method for identifying the attention degree based on the human eyes.
The beneficial effects of the invention are: the invention captures the sight range and angle of human eyes to judge whether a user is watching the multimedia product terminal. By acquiring users' viewing preferences, a large database is built, enabling accurately targeted advertisement or video delivery. By monitoring the real-time camera preview stream to capture the face frame, the method effectively improves face-recognition efficiency and does not require the user to stay in front of the camera for a long time. After capturing a face, the method acquires the user's interpupillary distance, corrects it, and compares it with data in the database to detect whether the user is watching, thereby monitoring user attention.
Drawings
Fig. 1 is a flowchart illustrating an execution procedure of a method for identifying attention based on human eyes according to the present invention.
Fig. 2 is a flow chart of a method for recognizing attention based on human eyes disclosed by the invention.
Fig. 3 is a detailed flowchart of S3 in the method for recognizing attention based on human eyes according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be described in further detail below with reference to the accompanying drawings, but embodiments of the present invention are not limited thereto.
Abbreviations and key term definitions:
the multimedia terminal is developed based on an Android system.
Face recognition attention: on the basis of face recognition, whether the eyeball sight range is in the interval concerned by the equipment is detected.
The ladder inner screen: multimedia terminal equipment inside the elevator.
Attention device: face identification camera.
Example 1:
Referring to fig. 1-2, the present embodiment discloses a method for identifying attention based on human eyes. On the basis of face acquisition and liveness detection technology, it develops face-recognition attention and non-perception (passive) face recognition for detecting the eyeball sight range. During the project's test stage, the attention device collects eyeball data of users watching the multimedia terminal from different angles through a large number of recognition tests, and the data results are then fine-tuned through a dynamically distributed algorithm model. After the user's actual interpupillary-distance data is obtained, all close interpupillary distances in the database are found and matched, and the best match is selected as the nearest interpupillary distance; by combining the camera angle, eyeball capture, an eyeball gaze-range library, and similar methods, the system detects whether the user in front of the camera is watching the product terminal.
The method specifically comprises the following steps:
S1: Create a camera callback interface, implement helper classes for the camera, and expose preview data and related exceptions raised while the camera runs.
The attention device needs its parameters adjusted internally; this function calls the camera helper class and is deployed in the mips of the multimedia terminal. Examples of such parameters: whether the mirror function is enabled, whether the front camera is used, the preview size, terminal screen adaptation, the focus mode, and so on.
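The helper-class settings listed above can be sketched as a plain configuration object. The class and field names (and default values) below are illustrative assumptions, not the patent's actual implementation:

```java
// Hypothetical sketch of the camera helper configuration described above.
// Field names and defaults are assumptions for illustration only.
public class CameraHelperConfig {
    public boolean mirrorEnabled = true;     // mirror the preview (natural for front cameras)
    public boolean useFrontCamera = true;    // an attention device faces the viewer
    public int previewWidth = 1280;          // preview size setting
    public int previewHeight = 720;
    public String focusMode = "continuous-video"; // focusing mode setting

    @Override
    public String toString() {
        return String.format("camera[front=%b, mirror=%b, %dx%d, focus=%s]",
                useFrontCamera, mirrorEnabled, previewWidth, previewHeight, focusMode);
    }
}
```

In an Android deployment these values would be applied to the camera instance before the preview stream is started.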
S2: the method comprises the steps of creating a camera instance in the multimedia terminal, calling a camera callback interface, and monitoring a video preview stream returned by the camera.
The attention device cannot by itself handle information collection, face drawing, and similar tasks, so the multimedia terminal must participate and process the preview stream in the camera callback interface. The multimedia terminal needs to acquire the camera instance before performing the next operation.
S3: drawing a face frame, acquiring eyeball data parameters through a face acquisition SDK, and monitoring data such as a face rotation range, an interpupillary distance, a correction interpupillary distance and the like.
After receiving the camera preview callback, the terminal starts the face-frame logic: deciding whether the face frame is displayed and whether the eye positions are drawn, with code operating according to actual requirements.
S4: and judging the attention degree in the multimedia terminal, and recording data when a user watches a screen.
Finally, the detected eye positions, interpupillary distance, corrected interpupillary distance, and related information are compared with the data in the database to judge whether the user is watching the screen; if so, the current timestamp or advertisement data is recorded.
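Step S4 can be sketched as a small recorder that logs a timestamp and the current advertisement whenever the comparison reports the user is watching. The class and method names are illustrative assumptions:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of step S4: when the database comparison says the user
// is watching, record the current timestamp and advertisement identifier.
public class GazeRecorder {
    public static class GazeRecord {
        public final long timestampMs;
        public final String adId;
        GazeRecord(long timestampMs, String adId) {
            this.timestampMs = timestampMs;
            this.adId = adId;
        }
    }

    private final List<GazeRecord> records = new ArrayList<>();

    /** Called once per analyzed preview frame. */
    public void onFrame(boolean isWatching, String currentAdId, long nowMs) {
        if (isWatching) {
            records.add(new GazeRecord(nowMs, currentAdId));
        }
    }

    public int recordCount() { return records.size(); }
    public List<GazeRecord> records() { return records; }
}
```

In production the records would be uploaded to build the viewing-preference database mentioned in the abstract.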
Further, referring to fig. 3, S3 specifically includes the following steps:
S301: Face detection parameters are set when the attention device is initialized, including the minimum face number and quality detection standards such as blur, occlusion, and illumination.
S302: and the camera stream acquires the face and then compares the face with the database, and after the comparison is finished, the face information is scanned and uploaded.
S303: after various parameters of the eyeballs are obtained, angle filtering is carried out, and finally face data recognized by the camera are returned.
Performing angle filtering includes: judging whether the left-right deflection angle of the face is within the initialization parameters, whether the in-plane head rotation angle exceeds its limit, and whether the up-down deflection angle of the face exceeds its limit; faces exceeding the limits are filtered out.
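The three checks above amount to comparing yaw (left-right), roll (in-plane rotation), and pitch (up-down) against configured limits. The threshold values and class name below are illustrative assumptions:

```java
// Sketch of the angle filter described in S303: a detected face is kept only
// when yaw, roll, and pitch are all within the limits set at initialization.
// The limit values used here are illustrative, not from the patent.
public class AngleFilter {
    private final double maxYawDeg;   // left-right deflection limit
    private final double maxRollDeg;  // in-plane head rotation limit
    private final double maxPitchDeg; // up-down deflection limit

    public AngleFilter(double maxYawDeg, double maxRollDeg, double maxPitchDeg) {
        this.maxYawDeg = maxYawDeg;
        this.maxRollDeg = maxRollDeg;
        this.maxPitchDeg = maxPitchDeg;
    }

    /** Returns true when the face passes all three angle checks. */
    public boolean accept(double yawDeg, double rollDeg, double pitchDeg) {
        return Math.abs(yawDeg) <= maxYawDeg
            && Math.abs(rollDeg) <= maxRollDeg
            && Math.abs(pitchDeg) <= maxPitchDeg;
    }
}
```

Only faces that pass the filter are returned as recognized face data.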
S304: After the face data is obtained, the eyeball interpupillary distance is calculated first. The face interpupillary distance is computed from the x and y coordinates of the left-eye and right-eye center points with the interpupillary distance formula sqrt((x2-x1).pow(2) + (y2-y1).pow(2)), where x1 is the x coordinate of the left-eye midpoint, x2 the x coordinate of the right-eye midpoint, y1 the y coordinate of the left-eye midpoint, and y2 the y coordinate of the right-eye midpoint; pow(2) denotes the mathematical square and sqrt() the square root.
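The S304 formula is the Euclidean distance between the two eye centers, written out here as a small utility (the class name is an illustrative choice):

```java
// The interpupillary-distance formula from S304:
// ipd = sqrt((x2 - x1)^2 + (y2 - y1)^2),
// i.e. the Euclidean distance between the left-eye and right-eye center points.
public final class PupilDistance {
    private PupilDistance() {}

    public static double ipd(double x1, double y1, double x2, double y2) {
        double dx = x2 - x1;
        double dy = y2 - y1;
        return Math.sqrt(dx * dx + dy * dy);
    }
}
```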
S305: Then the corrected eyeball interpupillary distance is calculated from the x and y coordinates of the left-eye and right-eye midpoints and the face offset, using the correction formula ipd / cos(abs(yaw) * Math.PI / 180), where ipd is the human-eye interpupillary distance computed by the interpupillary distance formula, yaw is the face rotation offset (abs(yaw) is its absolute value), and Math.PI is the mathematical constant pi.
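The intuition behind S305: when the head turns by yaw degrees, the projected distance between the eyes shrinks by a factor of cos(yaw), so dividing the measured distance by cos(|yaw|) recovers the frontal-view interpupillary distance. A direct transcription (class name is an illustrative choice):

```java
// The corrected interpupillary distance from S305:
// corrected = ipd / cos(abs(yaw) * PI / 180), with yaw in degrees.
public final class CorrectedPupilDistance {
    private CorrectedPupilDistance() {}

    public static double correct(double ipd, double yawDeg) {
        // Convert |yaw| from degrees to radians before taking the cosine.
        return ipd / Math.cos(Math.abs(yawDeg) * Math.PI / 180.0);
    }
}
```

For example, a measured distance of 5 at a 60-degree yaw corrects to 10, since cos(60°) = 0.5. (The formula assumes |yaw| stays well below 90 degrees, which the angle filter of S303 guarantees.)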
S306: After the interpupillary distance is obtained, it is matched against the database: all close interpupillary-distance entries are found, the best-matching entry is selected as the nearest interpupillary distance, and finally it is judged whether the target data falls within the range of the matched data.
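The nearest-match step in S306 can be sketched as follows. The tolerance band and class name are assumptions; the patent does not state how "within the range of the matched data" is defined numerically:

```java
import java.util.List;

// Sketch of S306: among stored pupil-distance entries, find the one nearest
// the measured value, then check whether the measurement lies within a
// tolerance band around that entry. The tolerance is an assumed parameter.
public class PupilDistanceMatcher {
    private final List<Double> database;
    private final double tolerance;

    public PupilDistanceMatcher(List<Double> database, double tolerance) {
        this.database = database;
        this.tolerance = tolerance;
    }

    /** Returns the stored pupil distance nearest to the measured value. */
    public double nearest(double measured) {
        double best = database.get(0);
        for (double d : database) {
            if (Math.abs(d - measured) < Math.abs(best - measured)) {
                best = d;
            }
        }
        return best;
    }

    /** True when the measurement lies within tolerance of its nearest match. */
    public boolean matches(double measured) {
        return Math.abs(nearest(measured) - measured) <= tolerance;
    }
}
```

A linear scan suffices here; a production system with a large gaze-range library would likely keep the entries sorted and binary-search instead.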
Example 2
The present embodiment discloses a multimedia terminal, which determines whether a user is watching a multimedia product terminal by the method of identifying attention based on human eyes as described in embodiment 1.
An application scenario of the invention is as follows: the face attention device is mounted on the in-elevator screen, so that a user's degree of attention to an advertisement can be effectively detected; combined with the timestamp of the advertisement playing on the screen, an advertisement-delivery profile exclusive to each user is generated, laying a foundation for big data. The face attention device judges whether the user is watching the multimedia product terminal by capturing the sight range and angle of the eyeballs on the face.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (5)

1. A method for identifying attention based on human eyes is characterized by comprising the following steps:
creating a camera callback interface, realizing help classes in the camera, and exposing preview data and related exceptions when the camera is executed;
the method comprises the steps of creating a camera instance in a multimedia terminal, calling a camera callback interface, and monitoring a video preview stream returned by a camera;
drawing a face frame, acquiring eyeball data parameters through a face acquisition SDK, and monitoring the rotation range, the interpupillary distance and the corrected interpupillary distance data of the face;
and judging the attention degree in the multimedia terminal, and recording data when a user watches a screen.
2. The method for identifying attention based on human eyes as claimed in claim 1, wherein the method for drawing a human face frame, acquiring eyeball data parameters through a human face acquisition SDK, monitoring the rotation range of the human face, the interpupillary distance and correcting the interpupillary distance data comprises the following steps:
setting a face detection parameter when the attention equipment is initialized;
the camera stream acquires a face and then compares the face with the database, and after the comparison is finished, the face information is scanned and uploaded;
after various parameters of the eyeballs are obtained, angle filtering is carried out, and finally face data recognized by the camera are returned;
after the face data are obtained, the eyeball interpupillary distance is calculated first: the face interpupillary distance is computed from the x and y coordinates of the left-eye and right-eye center points with the interpupillary distance formula sqrt((x2-x1).pow(2) + (y2-y1).pow(2)); where x1 is the x coordinate of the left-eye midpoint, x2 the x coordinate of the right-eye midpoint, y1 the y coordinate of the left-eye midpoint, and y2 the y coordinate of the right-eye midpoint; pow(2) denotes the mathematical square and sqrt() the square root;
then the corrected eyeball interpupillary distance is calculated from the x and y coordinates of the left-eye and right-eye midpoints and the face offset, using the correction formula ipd / cos(abs(yaw) * Math.PI / 180); where ipd is the human-eye interpupillary distance computed by the interpupillary distance formula, yaw is the face rotation offset, abs(yaw) is its absolute value, and Math.PI is the mathematical constant pi;
after the interpupillary distance is obtained, it is matched against the database: all close interpupillary-distance entries are found, the best-matching entry is selected as the nearest interpupillary distance, and finally it is judged whether the target data falls within the range of the matched data.
3. The method for identifying attention based on human eye according to claim 2, wherein the filtering of the angle comprises:
judging whether the left-right deflection angle of the face is within the initialization parameters, whether the in-plane head rotation angle exceeds its limit, and whether the up-down deflection angle of the face exceeds its limit; faces exceeding the limits are filtered out.
4. The method of claim 2, wherein the face detection parameters comprise: minimum face number, blur, occlusion, and illumination quality detection standard.
5. A multimedia terminal, characterized in that the multimedia terminal determines whether a user is watching a multimedia product terminal by the method of recognizing attention based on human eyes as claimed in any one of claims 1 to 4.
CN202111248038.7A 2021-10-26 2021-10-26 Method for identifying attention based on human eyes and multimedia terminal Pending CN113971220A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111248038.7A CN113971220A (en) 2021-10-26 2021-10-26 Method for identifying attention based on human eyes and multimedia terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111248038.7A CN113971220A (en) 2021-10-26 2021-10-26 Method for identifying attention based on human eyes and multimedia terminal

Publications (1)

Publication Number Publication Date
CN113971220A true CN113971220A (en) 2022-01-25

Family

ID=79588367

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111248038.7A Pending CN113971220A (en) 2021-10-26 2021-10-26 Method for identifying attention based on human eyes and multimedia terminal

Country Status (1)

Country Link
CN (1) CN113971220A (en)

Similar Documents

Publication Publication Date Title
US7308120B2 (en) Identification of facial image with high accuracy
JP4377472B2 (en) Face image processing device
US8116517B2 (en) Action analysis apparatus
CN112272292B (en) Projection correction method, apparatus and storage medium
JP2000113199A (en) Method for deciding eye position
JP2007042072A (en) Tracking apparatus
TW201520807A (en) Head mounted display apparatus and login method thereof
US10987198B2 (en) Image simulation method for orthodontics and image simulation device thereof
KR20060119968A (en) Apparatus and method for feature recognition
US11232584B2 (en) Line-of-sight estimation device, line-of-sight estimation method, and program recording medium
JP2006350578A (en) Image analysis device
CN112115886A (en) Image detection method and related device, equipment and storage medium
US7377650B2 (en) Projection of synthetic information
WO2017057631A1 (en) Viewer emotion determination apparatus that eliminates influence of brightness, breathing, and pulse, viewer emotion determination system, and program
JP2003150942A (en) Eye position tracing method
CN109410138B (en) Method, device and system for modifying double chin
WO2008132741A2 (en) Apparatus and method for tracking human objects and determining attention metrics
CN112541400A (en) Behavior recognition method and device based on sight estimation, electronic equipment and storage medium
CN109194952B (en) Head-mounted eye movement tracking device and eye movement tracking method thereof
CN114219868A (en) Skin care scheme recommendation method and system
CN112257507A (en) Method and device for judging distance and human face validity based on human face interpupillary distance
Göcke et al. Automatic extraction of lip feature points
CN113971220A (en) Method for identifying attention based on human eyes and multimedia terminal
CN116363725A (en) Portrait tracking method and system for display device, display device and storage medium
CN115601316A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230220

Address after: No. 1505, 15th floor, unit 2, building 1, No. 99, Chengdu High-tech Zone, 610000, Sichuan Province

Applicant after: CHENGDU XINCHAO MEDIA GROUP Co.,Ltd.

Address before: No.1505, 15th floor, unit 2, building 1, No.99, Jinhui West 1st Street, Chengdu hi tech Zone, 610000, Sichuan Province

Applicant before: CHENGDU XINCHAO MEDIA GROUP Co.,Ltd.

Applicant before: Chengdu Baixin Zhilian Technology Co.,Ltd.

TA01 Transfer of patent application right