CN110476141A - Gaze tracking method and user terminal for executing the same - Google Patents
- Publication number
- CN110476141A CN110476141A CN201880023479.7A CN201880023479A CN110476141A CN 110476141 A CN110476141 A CN 110476141A CN 201880023479 A CN201880023479 A CN 201880023479A CN 110476141 A CN110476141 A CN 110476141A
- Authority
- CN
- China
- Prior art keywords
- learning data
- user
- point
- person
- sight
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/0093—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0241—Advertisements
- G06Q30/0251—Targeted advertisements
- G06Q30/0261—Targeted advertisements based on user location
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0241—Advertisements
- G06Q30/0251—Targeted advertisements
- G06Q30/0269—Targeted advertisements based on user profile or attribute
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/165—Detection; Localisation; Normalisation using facial parts and geometric relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/166—Detection; Localisation; Normalisation using acquisition arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/193—Preprocessing; Feature extraction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/366—Image reproducers using viewer tracking
- H04N13/383—Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
Abstract
The present invention provides a gaze tracking method and a user terminal for executing the method. A user terminal according to one embodiment of the invention comprises: a photographing device for capturing a face image of a user; and a gaze tracking unit that, based on a set rule, obtains from the face image a vector indicating the orientation of the user's face and a pupil image of the user, and tracks the user's gaze by inputting the face image, the vector, and the pupil image into a set deep learning model.
Description
Technical field
The present invention relates to gaze tracking technology.
Background art
Eye tracking is a technology that tracks the position of a user's gaze by sensing eye movement. Image analysis, contact lens, and attached sensor methods may be used. The image analysis method detects pupil movement by analyzing camera images in real time and calculates the gaze direction relative to a fixed position reflected on the cornea. The contact lens method uses light reflected from a mirror-embedded contact lens or the magnetic field of a coil-embedded contact lens; it is less convenient but highly accurate. The attached sensor method places sensors around the eyes and senses eyeball movement from changes in the electric field caused by eye activity; it can detect eye movement even when the eyes are closed (e.g., during sleep).
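The image analysis method described above can be sketched in a few lines: threshold a grayscale eye image for its darkest pixels and take their centroid as a crude pupil center. This is a minimal illustration under assumed conditions (uniform lighting, dark pupil), not the algorithm of this patent; the function name and threshold value are assumptions.

```python
import numpy as np

def estimate_pupil_center(eye_gray, thresh=50):
    """Return (row, col) centroid of the darkest pixels, a crude pupil estimate."""
    mask = eye_gray < thresh            # pupil pixels are the darkest region
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None                     # no sufficiently dark region found
    return float(ys.mean()), float(xs.mean())

# Synthetic 40x40 "eye": bright background with a dark 6x6 pupil block
eye = np.full((40, 40), 200, dtype=np.uint8)
eye[17:23, 9:15] = 10                   # rows 17..22, cols 9..14
print(estimate_pupil_center(eye))       # → (19.5, 11.5)
```

Production systems would instead use robust detectors (e.g., ellipse fitting on the iris boundary), but the input/output contract is the same: an eye image in, pupil coordinates out.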
Recently, devices and application fields that use gaze tracking technology have been expanding, and attempts to apply gaze tracking when providing advertising services on terminals such as smartphones are increasing. However, to provide advertising services effectively, the accuracy of gaze tracking must be improved, and rating-based bidding schemes, reward schemes, and the like must be combined effectively.
Prior art documents
Patent document
Korean Patent Publication No. 10-1479471 (2015.01.13)
Summary of the invention
Problems to be solved by the invention
Embodiments of the present invention provide a means of improving the accuracy of gaze tracking when tracking a gaze based on a deep learning model.
Means for solving the problem
According to an illustrative embodiment of the present invention, a user terminal is provided, comprising: a photographing device for capturing a face image of a user; and a gaze tracking unit that, based on a set rule, obtains from the face image a vector indicating the orientation of the user's face and a pupil image of the user, and tracks the user's gaze by inputting the face image, the vector, and the pupil image into a set deep learning model.
The user terminal may further include a learning data collection unit that, when a set action is received from a gazer watching a set point on the screen, collects learning data comprising the face image of the gazer captured at the time the action is received and the location information of the set point. The gaze tracking unit trains the deep learning model with the learning data and tracks the user's gaze using the trained deep learning model.
When the gazer touches the point, the learning data collection unit may collect the learning data at the time the touch occurs.
The learning data collection unit may drive the photographing device and collect the learning data at the time the gazer touches the point.
The learning data collection unit may transmit the learning data collected at the time the gazer touches the point to a server.
While the photographing device is running, when the gazer touches the point, the learning data collection unit may collect the learning data at the time the touch occurs and at time points separated by a set interval before and after the touch.
The learning data collection unit may change a visual element of the point after the gazer touches it, so that the gazer's gaze remains on the point after the touch.
The learning data collection unit may display a set sentence at the point and, when the gazer speaks, collect the learning data at the start time of the utterance.
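The event-triggered collection described above amounts to recording a labeled sample (face frame, point coordinates, timestamp) the moment a touch or utterance occurs. A minimal sketch, with all names hypothetical:

```python
import time
from dataclasses import dataclass, field

@dataclass
class LearningSample:
    face_frame: bytes      # raw face image captured at the event
    point_xy: tuple        # screen coordinates of the set point
    timestamp: float

@dataclass
class LearningDataCollector:
    samples: list = field(default_factory=list)

    def on_event(self, face_frame, point_xy, t=None):
        """Record one sample at the moment a touch or utterance begins."""
        self.samples.append(LearningSample(face_frame, point_xy,
                                           t if t is not None else time.time()))

collector = LearningDataCollector()
collector.on_event(b"<jpeg bytes>", (120, 480), t=1.0)   # touch on the point
collector.on_event(b"<jpeg bytes>", (120, 480), t=2.5)   # utterance start
print(len(collector.samples))                            # → 2
```

Each sample pairs the captured face with the ground-truth gaze target (the point's location), which is exactly what a supervised gaze model needs for training.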
The gaze tracking unit may obtain, based on the rule, pupil position coordinates and face position coordinates of the user from the face image, and may input the pupil position coordinates and the face position coordinates into the deep learning model together with the vector indicating the orientation of the user's face.
The user terminal may further include a content providing unit for displaying advertising content on the screen. The gaze tracking unit judges whether the user is watching the advertising content based on the detected gaze of the user and the position of the advertising content on the screen, and the content providing unit may change the position of the advertising content on the screen in consideration of its position and the time the user has watched it.
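The watching judgment above reduces to a point-in-rectangle test on the estimated gaze point, plus a dwell-time policy for repositioning the ad. A sketch under assumed screen coordinates (the threshold and policy are illustrative, not from the patent):

```python
def is_watching(gaze_xy, ad_rect):
    """ad_rect = (left, top, right, bottom) in screen coordinates."""
    x, y = gaze_xy
    left, top, right, bottom = ad_rect
    return left <= x <= right and top <= y <= bottom

def should_move_ad(dwell_seconds, min_dwell=3.0):
    """Illustrative policy: relocate the ad once it has been watched long enough."""
    return dwell_seconds >= min_dwell

ad = (100, 200, 400, 350)
print(is_watching((250, 300), ad))   # → True  (gaze inside the ad)
print(is_watching((50, 300), ad))    # → False (gaze left of the ad)
print(should_move_ad(4.2))           # → True  (3 s dwell threshold exceeded)
```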
According to another illustrative embodiment of the present invention, a gaze tracking method is provided, comprising: capturing, in a photographing device, a face image of a user; obtaining, in a gaze tracking unit and based on a set rule, a vector indicating the orientation of the user's face and a pupil image of the user from the face image; and tracking, in the gaze tracking unit, the user's gaze by inputting the face image, the vector, and the pupil image into a set deep learning model.
The gaze tracking method may further include: collecting, in a learning data collection unit, when a set action is received from a gazer watching a set point on the screen, learning data comprising the face image of the gazer captured at the time the action is received and the location information of the set point; and training, in the gaze tracking unit, the deep learning model with the learning data, wherein the step of tracking the user's gaze tracks the gaze using the deep learning model trained with the learning data.
In the step of collecting the learning data, the collection unit may collect the learning data at the time the touch occurs when the gazer touches the point.
In the step of collecting the learning data, the photographing device may be driven and the learning data collected at the time the gazer touches the point.
The gaze tracking method may further include transmitting, in the learning data collection unit, the learning data collected at the time the gazer touches the point to a server.
In the step of collecting the learning data, while the photographing device is running, when the gazer touches the point, the learning data may be collected at the time the touch occurs and at time points separated by a set interval before and after the touch.
The gaze tracking method may further include changing, in the learning data collection unit, a visual element of the point after the gazer touches it, so that the gazer's gaze remains on the point after the touch.
In the step of collecting the learning data, a set sentence may be displayed at the point and, when the gazer speaks, the learning data collected at the start time of the utterance.
The gaze tracking method may further include obtaining, in the gaze tracking unit and based on the rule, pupil position coordinates and face position coordinates of the user from the face image; in the step of tracking the user's gaze, the pupil position coordinates and the face position coordinates may be input into the deep learning model together with the vector indicating the orientation of the user's face.
The gaze tracking method may further include: displaying, in a content providing unit, advertising content on the screen; judging, in the gaze tracking unit, whether the user is watching the advertising content based on the detected gaze of the user and the position of the advertising content on the screen; and changing, in the content providing unit, the position of the advertising content on the screen in consideration of its position and the time the user has watched it.
Invention effect
According to embodiments of the present invention, the face image of the user, the pupil image, and the vector indicating the orientation of the user's face are all used as input data for the deep learning model when tracking the gaze, so the accuracy of gaze tracking can be further improved.
In addition, according to embodiments of the present invention, when an action such as a touch or a sound is received from a gazer watching a set point on the screen, the face image of the gazer captured at the time the action is received and the location information of the point are used as learning data for the gaze tracking deep learning model, so the accuracy and reliability of gaze tracking can be further improved.
Detailed description of the invention
Fig. 1 is a block diagram illustrating the detailed configuration of an advertising system according to one embodiment of the invention.
Fig. 2 is a block diagram illustrating the detailed configuration of a user terminal according to one embodiment of the invention.
Fig. 3 is a diagram illustrating the process of tracking a user's gaze in a gaze tracking unit according to one embodiment of the invention.
Fig. 4 is an example of a face vector according to one embodiment of the invention.
Fig. 5 is an example illustrating the process of tracking a user's gaze with a deep learning model according to one embodiment of the invention.
Fig. 6 is an example illustrating a process of collecting learning data for input to a deep learning model in a learning data collection unit according to one embodiment of the invention.
Fig. 7 is another example illustrating a process of collecting learning data for input to a deep learning model in a learning data collection unit according to one embodiment of the invention.
Fig. 8 is an example illustrating the process in Fig. 7 of changing a visual element of a set point when the gazer touches it.
Fig. 9 is yet another example illustrating a process of collecting learning data for input to a deep learning model in a learning data collection unit according to one embodiment of the invention.
Figure 10 is an example illustrating a gaze-based bidding scheme according to one embodiment of the invention.
Figure 11 is a flowchart illustrating a gaze tracking method according to one embodiment of the invention.
Figure 12 is a block diagram illustrating a computing environment including a computing device applicable to the illustrated embodiments.
Specific embodiment
Hereinafter, specific embodiments of the present invention are described with reference to the accompanying drawings. The following detailed description is provided to assist comprehensive understanding of the methods, apparatus, and/or systems described in this specification. However, it is illustrative only and does not limit the present invention.
In describing embodiments of the present invention, detailed descriptions of well-known techniques related to the invention are omitted where they could unnecessarily obscure its subject matter. The terms used below are defined in consideration of their function in the present invention and may vary according to the intention or custom of users and operators; their definitions should therefore be based on the content of this specification as a whole. The terminology used in the detailed description is intended only to describe embodiments of the invention, not to limit it. Unless clearly used otherwise, singular expressions include the plural. In this specification, expressions such as "comprising" or "having" indicate certain features, numbers, steps, operations, elements, parts, or combinations thereof, and should not be understood to exclude the presence or possible addition of one or more other features, numbers, steps, operations, elements, parts, or combinations thereof.
Fig. 1 is a block diagram illustrating the detailed configuration of an advertising system 100 according to one embodiment of the invention. As shown in Fig. 1, the advertising system 100 includes a user terminal 102, a server 104, an advertiser terminal 106, and a content developer terminal 108.
The user terminal 102 is a device owned by a user to receive various advertising services, and may be, for example, a mobile device such as a smartphone, tablet computer, or laptop. However, the type of the user terminal 102 is not limited thereto; any communication device having a screen for displaying advertising content and a photographing device for capturing images may be the user terminal 102 according to embodiments of the present invention.
The user terminal 102 may have a screen through which advertising content is displayed. In addition, the user terminal 102 may have a photographing device such as a camera or camcorder, and may track the user's gaze based on the user's face image captured by the photographing device. Based on the detected gaze of the user and the position of the advertising content on the screen, the user terminal 102 can judge whether the user is watching the advertising content. To this end, the user terminal 102 may receive a set mobile application provided by the server 104 and, through the application, link with the screen, photographing device, and so on of the user terminal 102 to provide the advertising content, the gaze tracking function, and the like.
In addition, the user terminal 102 can track the user's gaze using a set rule-based algorithm and a deep learning model. Here, a rule-based algorithm is one that obtains and uses the various data needed for gaze tracking by means of preset image processing techniques, mathematical formulas, and the like; examples include face recognition algorithms (e.g., Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA)), facial feature detection algorithms (e.g., Support Vector Machine (SVM), Speeded Up Robust Features (SURF)), image-based head-tracking algorithms, and algorithms for pupil extraction and the computation of pupil position coordinates. The deep learning model may be, for example, a Convolutional Neural Network (CNN) model.
The server 104 relays various data for providing advertising services between the user terminal 102 and the advertiser terminal 106. As shown in Fig. 1, the server 104 may be connected via a network (not shown) to the user terminal 102, the advertiser terminal 106, and the content developer terminal 108. The server 104 provides the mobile application for the advertising service to the user terminal 102 at the terminal's request, and the terminal 102 accesses the server 104 through the mobile application to receive various advertising services. In addition, the server 104, in cooperation with the advertiser terminal 106, receives advertising content from the content developer terminal 108 and provides it to the terminal 102. The server 104 may then collect from the terminal 102 various data related to the effectiveness of the advertising content (e.g., display time/count per advertising content, gaze time/count per advertising content) and provide them to the advertiser terminal 106.
The advertiser terminal 106 is a terminal owned by an advertiser and may be connected to the server 104 via the network. The advertiser selects at least one of the advertising contents provided by the content developer terminal 108, and the advertiser terminal 106 may provide information about the selected advertising content to the server 104. The advertiser terminal 106 may also receive from the server 104 various data related to the effectiveness of the advertising content.
The content developer terminal 108 is a terminal owned by a developer who creates advertising content and may be connected to the server 104 via the network. Advertising content produced or edited by the content developer may be provided to the advertiser terminal 106 through the server 104. The server 104 receives from the advertiser terminal 106 the information about the advertising content selected by the advertiser and may provide the corresponding advertising content to the user terminal 102.
Fig. 2 is a block diagram illustrating the detailed configuration of the user terminal 102 according to one embodiment of the invention. As shown in Fig. 2, the user terminal 102 includes a content providing unit 202, a photographing device 204, a gaze tracking unit 206, and a learning data collection unit 208.
The content providing unit 202 displays advertising content on the screen of the user terminal 102. As one example, the content providing unit 202 may display advertising content on the lock screen, i.e., the screen shown when the user terminal 102 has entered the locked state and a touch input for releasing the lock is received. The content providing unit 202 may display advertising content in text, image, or video form on the lock screen. As another example, the content providing unit 202 may display advertising content on the screen in response to a command input by the user, such as executing an application or a set menu. However, the screens on which advertising content is displayed are not limited to these examples; the advertising content may be displayed on various preset forms of screen.
The photographing device 204 captures the user watching the screen of the user terminal 102 and may be, for example, a camera or camcorder. The photographing device 204 may be located, for example, on the front of the user terminal 102. The user terminal 102 can track the user's gaze from the user's face image obtained through the photographing device 204.
The gaze tracking unit 206 tracks the user's gaze. The gaze tracking unit 206 can track the user's gaze using the set rule-based algorithm and deep learning model. In this embodiment, deep learning is a kind of Artificial Neural Network (ANN) based on human neural network theory, and refers to a set of machine learning models or algorithms built as a layer structure, namely a Deep Neural Network (DNN) having one or more hidden layers between an input layer and an output layer. The gaze tracking unit 206 may track the user's gaze in cooperation with the photographing device 204.
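The DNN structure defined above (input layer, one or more hidden layers, output layer) can be illustrated with a small numpy forward pass. The layer sizes and random weights are arbitrary placeholders, not the patent's model.

```python
import numpy as np

rng = np.random.default_rng(1)

def dnn_forward(x, layers):
    """Pass x through input -> hidden layers (ReLU) -> linear output layer."""
    for i, W in enumerate(layers):
        x = W @ x
        if i < len(layers) - 1:     # hidden layers use a nonlinearity
            x = np.maximum(0.0, x)
    return x

# Input layer of 10 units, two hidden layers (16 and 8 units), output layer of 2 units
layers = [rng.normal(size=(16, 10)),
          rng.normal(size=(8, 16)),
          rng.normal(size=(2, 8))]
out = dnn_forward(rng.normal(size=10), layers)
print(out.shape)                    # → (2,)
```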
As one example, when the photographing device 204 perceives the user's face, the gaze tracking unit 206 can track the user's gaze using the rule-based algorithm and the deep learning model. As another example, when the photographing device 204 does not perceive the user's face, the gaze tracking unit 206 operates in sleep mode and stops the various operations for tracking the gaze.
When the photographing device 204 perceives the user's face, the gaze tracking unit 206 obtains the user's face image captured by the photographing device 204, and based on the set rule can obtain from the face image a vector indicating the orientation of the user's face and a pupil image of the user. The gaze tracking unit 206 can then track the user's gaze by inputting the face image, the vector, and the pupil image into the deep learning model 210. Here, it is assumed that the deep learning model has been trained in advance with a sufficient amount of learning data collected by the learning data collection unit 208. In addition, the gaze tracking unit 206 may obtain from the face image, based on the rule, the user's pupil position coordinates, face position coordinates, pupil direction vector, and the like, and input them into the deep learning model 210. In this way, by inputting the images of the user's face and pupil together with the various rule-based quantitative data for gaze tracking into the deep learning model 210, the gaze tracking unit 206 can improve the accuracy of gaze tracking.
In addition, the gaze tracking unit 206 can judge whether the user is watching the advertising content based on the detected gaze of the user and the position of the advertising content on the screen. As described later, the content providing unit 202 may change the position of the advertising content on the screen in consideration of its position and the time the user has watched it.
Fig. 3 is a diagram illustrating the process of tracking a user's gaze in the gaze tracking unit 206 according to one embodiment of the invention, and Fig. 4 is an example of a face vector according to one embodiment of the invention. In addition, Fig. 5 is an example illustrating the process of tracking a user's gaze with the deep learning model 210 according to one embodiment of the invention.
Referring to Fig. 3, the gaze tracking unit 206 applies the rule-based algorithm to the user's face image obtained through the photographing device 204 and can obtain the vector indicating the orientation of the user's face, the pupil image, the pupil position coordinates, and the like. In general, when a user gazes at a specific point the face is oriented toward that point, so the probability that the orientation of the face coincides with the gaze direction is very high. Accordingly, in embodiments of the present invention, the gaze tracking unit 206 uses the user's face image, pupil image, and the vector indicating the orientation of the user's face as input data for the deep learning model 210, thereby improving the accuracy of gaze tracking. The gaze tracking unit 206 may, for example, extract a feature vector of the face image with a predetermined feature extraction algorithm and obtain from it the vector indicating the orientation of the user's face, i.e., the face vector (face-vector). An example of a face vector obtained in this way is shown in Fig. 4. In addition, the gaze tracking unit 206 detects the eye region from the face image using image processing techniques and can obtain the image of the eye region (i.e., the pupil image) and the iris or pupil position coordinates. The gaze tracking unit 206 also detects the user's face region on the screen and can obtain the position coordinates of the face region. The gaze tracking unit 206 can input the vector, pupil image/position coordinates, face image/position coordinates, and the like obtained in this way into the deep learning model 210.
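One simple way to derive such a face vector from detected landmarks is to take the unit normal of the plane spanned by three 3-D facial points (e.g., the two eye outer corners and the chin). This is a geometric sketch with assumed landmark positions, not the patent's feature extraction algorithm.

```python
import numpy as np

def face_vector(left_eye, right_eye, chin):
    """Unit normal of the plane through three 3-D facial landmarks,
    used as a rough face-orientation (face-vector) estimate."""
    a = np.asarray(right_eye, float) - np.asarray(left_eye, float)
    b = np.asarray(chin, float) - np.asarray(left_eye, float)
    n = np.cross(a, b)
    return n / np.linalg.norm(n)

# Frontal face: eyes level in the z = 0 plane, chin below them
v = face_vector(left_eye=(-3, 0, 0), right_eye=(3, 0, 0), chin=(0, -5, 0))
print(v)   # → [ 0.  0. -1.]  (face pointing along -z toward the camera)
```

Production head-pose estimators typically solve a 2D-to-3D correspondence problem instead (e.g., perspective-n-point), but the output contract is the same: one 3-D orientation vector per face.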
Referring to Fig. 5, the deep learning model 210 may have multiple layers in a layered structure, into which the input data can be fed. The deep learning model 210 can track the user's gaze based on the learning data learned in advance and the newly entered input data.
In addition, for the gaze tracking unit 206 to track the user's gaze more accurately with the deep learning model 210, the reliability of the training data of the deep learning model 210, i.e., the learning data for gaze tracking, must be improved.
To this end, referring again to Fig. 2, the learning data collection unit 208 collects a volume of learning data for training the deep learning model 210 based on gaze actions. Specifically, when a set action is received from a gazer watching a set point on the screen of the user terminal 102, the learning data collection unit 208 collects learning data comprising the face image of the gazer captured by the photographing device 204 at the time the action input is received and the location information of the set point. The action may be, for example, the gazer touching the screen or making a sound. Embodiments of the learning data collection unit are as follows.
<Embodiment>
When the gazer touches the screen to input a pattern for unlocking the lock screen: the imaging device 204 is driven at the initial time point of the gazer's touch input and captures the gazer's face → the captured face image of the gazer (or the face image and its position coordinates, the vector indicating the direction of the gazer's face, the gazer's pupil image and its position coordinates, etc.) and the location information of the point at which the pattern is first touched are collected as learning data.
When the gazer touches (or clicks) an application icon or menu key set in the screen: the imaging device 204 is driven at the time point of the gazer's touch input and captures the gazer's face → the captured face image of the gazer (or the face image and its position coordinates, the vector indicating the direction of the gazer's face, the gazer's pupil image and its position coordinates, etc.) and the location information of the touched point are collected as learning data.
When a point is displayed on the screen and the gazer is guided to touch it: when the gazer touches the point, the imaging device 204 is driven at the time point of the gazer's touch input and captures the gazer's face → the captured face image of the gazer (or the face image and its position coordinates, the vector indicating the direction of the gazer's face, the gazer's pupil image and its position coordinates, etc.) and the location information of the touched point are collected as learning data.
As described above, the collected learning data can be input into the deep learning model 210 and used for its training. Specifically, the eye tracking unit 206 makes the deep learning model 210 learn the learning data, and can track the user's gaze by using the deep learning model 210 that has learned the learning data. The method by which the learning data collection unit 208 collects learning data is described in further detail with reference to Figs. 6 to 9.
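The collect-on-action scheme described above can be sketched as follows: each set action (here, a touch) pairs the frame captured at that moment with the location of the touched point, and the pair is appended to the training set. The class and method names below are hypothetical, not the patent's API.

```python
# Minimal sketch of the learning data collection unit 208: on each touch,
# capture the gazer's face and label it with the touched point's location.

class LearningDataCollector:
    def __init__(self, camera):
        self.camera = camera       # callable returning the current face image
        self.samples = []

    def on_touch(self, point_xy):
        """Called when the gazer touches a set point on the screen."""
        face_image = self.camera()   # shoot at the moment the input is received
        self.samples.append({"face_image": face_image, "label": point_xy})

collector = LearningDataCollector(camera=lambda: "frame-at-touch")
collector.on_touch((120, 480))   # e.g. first point of the unlock pattern
collector.on_touch((300, 900))   # e.g. a touched application icon
```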
Fig. 6 illustrates an example of a process in which the learning data collection unit 208 according to an embodiment of the present invention collects learning data to be input into the deep learning model 210.
Referring to Fig. 6, the learning data collection unit 208 can display nine points for pattern input on the lock screen. The gazer can then input a predefined zigzag pattern by touch to unlock the lock screen. At this time, the gazer may touch and input the zigzag pattern in the direction from starting point S to end point E. The learning data collection unit 208 collects learning data that includes the face image of the gazer captured by the imaging device 204 at the initial time point of the gazer's touch input, i.e., the time point at which the gazer touches the starting point S, and the location information of the starting point S.
Fig. 7 illustrates another example of a process in which the learning data collection unit 208 according to an embodiment of the present invention collects learning data to be input into the deep learning model 210, and Fig. 8 illustrates an example of a process of changing the visual elements of a set point in Fig. 7 when the gazer touches the point.
Referring to Fig. 7, the learning data collection unit 208 can display key A (back key), key B (forward key), key C (start key), key D (end key), and the like on the screen. If the gazer touches key A, the learning data collection unit 208 collects learning data that includes the face image of the gazer captured by the imaging device 204 at the initial time point of the gazer's touch input, i.e., the time point at which the gazer touches key A, and the location information of key A.
In addition, after the gazer touches the point, the learning data collection unit 208 can change the visual elements (visual elements) of the point so that the gazer's gaze remains on the touched point after the touch. Here, a visual element is an element required to recognize an object output on the screen with the eyes, and may include, for example, the size, shape, color, brightness, or texture of an object output on the screen, of the outline of a region containing the object, or of the object itself.
Referring to Fig. 8, when the gazer touches key A, the learning data collection unit 208 can display the color of key A in a darker color, thereby guiding the gazer's gaze to remain on key A after the touch.
In addition, the learning data collection unit 208 can drive the imaging device 204 at the time point at which the gazer touches the set point and collect the learning data. That is, the imaging device 204 normally remains off, and at the time point at which the gazer touches the set point, the learning data collection unit 208 can drive the imaging device 204 and photograph the user, thereby preventing the increased battery consumption of the user terminal 102 that would be caused by the imaging device 204 working continuously. In addition, the learning data collection unit 208 can transmit the face image of the gazer captured at the time point at which the gazer touches the point and the location information of the point (that is, the learning data collected at the time point of the touch) to the server 104, so that the server 104 can collect and analyze them. The server 104 collects the learning data from the user terminal 102 and stores it in a database (not shown), and can also execute the analysis processes otherwise performed by the user terminal 102 (e.g., extracting the face vector, the pupil image and its position coordinates, the face image and its position coordinates, etc.).
In addition, in a state in which the imaging device 204 is operating, when the gazer touches a set point, the learning data collection unit 208 can collect the learning data at the time point at which the touch occurs and also at time points separated by a set time before and after the time point of the touch (for example, a time point 1 second before the touch and a time point 1 second after the touch). Since a gazer who is about to touch a specific point generally looks at that point both before and after the touch, learning data collected not only at the exact time point of the touch but also at time points shortly before and after it is likewise judged to have very high reliability. That is, according to an embodiment of the present invention, by collecting learning data at the time point at which a touch of a set point occurs while the imaging device 204 is operating, and also at time points separated by a set time interval before and after that time point, a large volume of highly reliable learning data can be easily collected.
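The before/during/after sampling described above can be sketched as follows. The frame buffer and its timestamp-keyed API are assumptions; the 1-second offset is the example given in the text.

```python
# Sketch of temporal sampling around a touch: while the imaging device runs
# continuously, a touch at time t yields labeled samples at t - offset, t,
# and t + offset, each using the buffered frame nearest that time.

def samples_around_touch(frame_buffer, touch_time, point_xy, offset=1.0):
    """frame_buffer maps timestamps to face images; the nearest frame wins."""
    picked = []
    for t in (touch_time - offset, touch_time, touch_time + offset):
        nearest = min(frame_buffer, key=lambda ts: abs(ts - t))
        picked.append({"time": nearest,
                       "face_image": frame_buffer[nearest],
                       "label": point_xy})
    return picked

buffer = {9.0: "f9", 10.0: "f10", 11.0: "f11", 12.0: "f12"}
triple = samples_around_touch(buffer, touch_time=10.0, point_xy=(64, 128))
```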
Fig. 9 illustrates another example of a process in which the learning data collection unit 208 according to an embodiment of the present invention collects learning data to be input into the deep learning model 210.
Referring to Fig. 9, the learning data collection unit 208 displays a set sentence at a specific point, and when the gazer reads the sentence aloud, the learning data collection unit 208 collects learning data that includes the face image of the gazer captured by the imaging device 204 at the start time point of the utterance and the location information of the point. As an example, the learning data collection unit 208 displays the sentence "Please say the following word" at the upper end of the screen and "apple" at the center of the screen; when the gazer makes a sound in order to read "apple", the learning data collection unit 208 collects learning data that includes the face image of the gazer captured by the imaging device 204 at the start time point of the utterance and the location information of the point at which the word "apple" is displayed.
As described above, according to an embodiment of the present invention, when an action such as a touch or a sound is received from a gazer who is looking at a point set in the screen, the face image of the gazer captured at the time point at which the action input is received and the location information of the point are used as learning data for the deep learning model 210 for eye tracking, so that the accuracy and reliability of eye tracking can be further improved.
Fig. 10 illustrates an example of gaze-based bidding according to an embodiment of the present invention. By comparing the gaze of the user detected by the eye tracking unit 206 with the position of advertising content in the screen, it can be judged whether the user is gazing at the advertising content, and it can thereby be determined at which positions users gaze at advertising content more. The eye tracking unit 206 calculates, for each region, the time and number of times the user gazes at advertising content, and can provide them to the server 104. The server 104, by linking with the advertiser terminal 106, can thereby conduct different bids (bidding) depending on the region in which advertising content is placed.
Referring to Fig. 10, the server 104 may bid 1 dollar for advertising content placed in a region that users gaze at more, bid 0.6 dollars for advertising content placed in a region that users gaze at less, and charge the advertiser terminal 106 accordingly.
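The region-based pricing of Fig. 10 can be sketched as a simple two-tier scheme. The $1.00 / $0.60 prices come from the text; the dwell-time threshold that separates "more gazed" from "less gazed" regions is an assumption.

```python
# Sketch of gaze-based bidding: regions attracting more gaze time are
# priced higher. Threshold and region names are illustrative assumptions.

def bid_per_region(gaze_seconds_by_region, threshold=5.0,
                   high_bid=1.0, low_bid=0.6):
    """Map each screen region to a bid based on accumulated gaze time."""
    return {region: (high_bid if seconds >= threshold else low_bid)
            for region, seconds in gaze_seconds_by_region.items()}

bids = bid_per_region({"top_banner": 12.5, "sidebar": 2.0})
```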
In addition, the content providing unit 202 can change the position of the advertising content in the screen by considering the position of the advertising content in the screen and the time for which the user gazes at the advertising content. For example, the content providing unit 202 finds, among the multiple regions in which advertising content is displayed, a gaze region that has been gazed at more than a set number of times or for more than a set time, and can move the currently displayed advertising content to that gaze region. In this way, the user can be guided to gaze at the advertising content more.
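The repositioning logic above can be sketched as follows. The thresholds, the per-region statistics format, and the tie-break by dwell time are assumptions, not the patent's specification.

```python
# Sketch of the content providing unit 202's repositioning rule: if some
# region's gaze count or dwell time exceeds the set thresholds, move the
# advertising content there; otherwise keep the current placement.

def choose_ad_region(current, stats, min_count=3, min_seconds=5.0):
    """stats: region -> (gaze_count, gaze_seconds)."""
    hot = [r for r, (n, s) in stats.items()
           if n >= min_count or s >= min_seconds]
    if not hot:
        return current
    # Prefer the qualifying region with the longest dwell time.
    return max(hot, key=lambda r: stats[r][1])

region = choose_ad_region("bottom", {"bottom": (1, 2.0), "center": (4, 9.5)})
```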
Fig. 11 is a flow chart illustrating an eye tracking method according to an embodiment of the present invention. Although the illustrated flow chart divides the method into multiple steps, at least some of the steps may be performed in a different order, combined with other steps and performed together, omitted, divided into sub-steps, or performed with one or more additional steps (not shown).
In step S102, the content providing unit 202 displays advertising content on the screen.
In step S104, the eye tracking unit 206 obtains the face image of the user through the imaging device 204.
In step S106, the eye tracking unit 206 tracks the gaze of the user using the set rule-based algorithm and the deep learning model. The method by which the eye tracking unit 206 tracks the user's gaze using the rule-based algorithm and the deep learning model has been described in detail above, so its description is omitted here.
In step S108, the eye tracking unit 206 judges whether the user is gazing at the advertising content based on the detected gaze of the user and the position of the advertising content in the screen.
In step S110, when it is judged that the user is gazing at the advertising content, the eye tracking unit 206 records the position of the advertising content in the screen, the time and number of times the gazer gazes at the advertising content, and the like.
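The flow of Fig. 11 (S102–S110) can be condensed into the following sketch. The `gaze_estimator` callable stands in for the rule-based algorithm plus deep learning model and is an assumption, as are all names below.

```python
# Condensed sketch of one iteration of Fig. 11: grab a frame, estimate the
# gaze point, and record a hit when the gaze falls inside the ad rectangle.

def track_step(frame, ad_rect, gaze_estimator, log):
    x, y = gaze_estimator(frame)                    # S104 + S106
    left, top, right, bottom = ad_rect
    if left <= x <= right and top <= y <= bottom:   # S108: gaze on the ad?
        log["hits"] = log.get("hits", 0) + 1        # S110: record the gaze
    return log

log = {}
for frame in ["f1", "f2", "f3"]:
    track_step(frame, ad_rect=(0, 0, 100, 100),
               gaze_estimator=lambda f: (50, 50) if f != "f2" else (200, 200),
               log=log)
```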
Fig. 12 is a block diagram illustrating a computing environment 10 that includes a computing device suitable for the enumerated embodiments. In the illustrated embodiment, each component may have functions and capabilities other than those described below, and additional components other than those described below may also be included.
The illustrated computing environment 10 includes a computing device 12. In one embodiment, the computing device 12 may be one or more of the components included in the advertising system 100 or the user terminal 102.
The computing device 12 includes at least one processor 14, a computer-readable storage medium 16, and a communication bus 18. The processor 14 causes the computing device 12 to operate according to the above-mentioned illustrated embodiments. For example, the processor 14 may execute one or more programs stored in the computer-readable storage medium 16. The one or more programs may include one or more computer-executable commands, which may be configured such that, when executed by the processor 14, they cause the computing device 12 to perform the operations of the illustrated embodiments.
The computer-readable storage medium 16 is configured to store computer-executable commands and program code, program data, and/or information in other convenient forms. The program 20 stored in the computer-readable storage medium 16 includes a set of commands executable by the processor 14. In one embodiment, the computer-readable storage medium 16 may be a memory (volatile memory such as random access memory, non-volatile memory, or an appropriate combination thereof), one or more disk storage devices, optical disc storage devices, flash memory devices, other forms of storage media that can be accessed by the computing device 12 and can store the required information, or an appropriate combination thereof.
The communication bus 18 interconnects the various other components of the computing device 12, including the processor 14 and the computer-readable storage medium 16.
The computing device 12 may also include one or more input/output interfaces 22 providing interfaces for one or more input/output devices 24, and one or more network communication interfaces 26. The input/output interface 22 may include the screens and input interface described above. The input/output interface 22 and the network communication interface 26 are connected to the communication bus 18. The input/output devices 24 can be connected to the other components of the computing device 12 through the input/output interface 22. The exemplary input/output devices 24 may include input devices such as a pointing device (a mouse, trackpad, etc.), a keyboard, a touch input device (a touch pad, touch screen, etc.), a voice or sound input device, various types of sensor devices, and/or an imaging device, and/or output devices such as a display device, printer, speaker, and/or network card. An exemplary input/output device 24 may be included inside the computing device 12 as a component constituting the computing device 12, or may be connected to the computing device 12 as a separate device distinct from the computing device 12.
The present invention has been described in detail above through representative embodiments, but a person having ordinary knowledge in the technical field to which the present invention belongs will understand that various modifications may be made to the above-described embodiments without departing from the scope of the present invention. Therefore, the scope of rights of the present invention should not be limited to the described embodiments, but should be determined based on the claims set forth below and their equivalents.
Claims (20)
1. A user terminal, comprising:
an imaging device for capturing a face image of a user; and
an eye tracking unit which, based on a set rule, obtains from the face image a vector indicating the direction of the user's face and a pupil image of the user, and tracks the gaze of the user by inputting the face image, the vector, and the pupil image into a set deep learning model.
2. The user terminal of claim 1,
further comprising a learning data collection unit which, when a set action is received from a gazer looking at a point set in a screen, collects learning data including a face image of the gazer captured at the time point at which the action is received and location information of the set point,
wherein the eye tracking unit makes the deep learning model learn the learning data, and tracks the gaze of the user with the deep learning model that has learned the learning data.
3. The user terminal of claim 2, wherein,
when the gazer touches the point, the learning data collection unit collects the learning data at the time point at which the touch occurs.
4. The user terminal of claim 3, wherein
the learning data collection unit drives the imaging device at the time point at which the gazer touches the point and collects the learning data.
5. The user terminal of claim 3, wherein
the learning data collection unit transmits the learning data collected at the time point at which the gazer touches the point to a server.
6. The user terminal of claim 3, wherein,
in a state in which the imaging device is running, when the gazer touches the point, the learning data collection unit collects the learning data at the time point at which the touch occurs and at time points separated by a set time interval before and after the time point at which the touch occurs, respectively.
7. The user terminal of claim 3, wherein,
after the gazer touches the point, the learning data collection unit changes the visual elements of the point so that the gaze of the gazer remains on the point after the touch.
8. The user terminal of claim 2, wherein
the learning data collection unit displays a set sentence at the point and, when the gazer makes a sound, collects the learning data at the start time point of the utterance.
9. The user terminal of claim 1, wherein
the eye tracking unit obtains, from the face image and based on the rule, pupil position coordinates and face position coordinates of the user, and inputs the pupil position coordinates and the face position coordinates, together with the vector indicating the direction of the user's face, into the deep learning model.
10. The user terminal of claim 1,
further comprising a content providing unit which displays advertising content on the screen,
wherein the eye tracking unit judges whether the user gazes at the advertising content based on the detected gaze of the user and the position of the advertising content in the screen, and
the content providing unit changes the position of the advertising content in the screen by considering the position of the advertising content in the screen and the time for which the user gazes at the advertising content.
11. An eye tracking method, comprising:
a step of capturing, in an imaging device, a face image of a user;
a step of obtaining, in an eye tracking unit and based on a set rule, from the face image a vector indicating the direction of the user's face and a pupil image of the user; and
a step of tracking, in the eye tracking unit, the gaze of the user by inputting the face image, the vector, and the pupil image into a set deep learning model.
12. The eye tracking method of claim 11, further comprising:
a step of collecting, in a learning data collection unit, when a set action is received from a gazer looking at a point set in a screen, learning data including a face image of the gazer captured at the time point at which the action is received and location information of the set point; and
a step of making, in the eye tracking unit, the deep learning model learn the learning data,
wherein the step of tracking the gaze of the user tracks the gaze of the user using the deep learning model that has learned the learning data.
13. The eye tracking method of claim 12, wherein,
in the step of collecting learning data, when the gazer touches the point, the learning data is collected at the time point at which the touch occurs.
14. The eye tracking method of claim 13, wherein
the step of collecting learning data drives the imaging device at the time point at which the gazer touches the point and collects the learning data.
15. The eye tracking method of claim 13, further comprising
a step of transmitting, in the learning data collection unit, the learning data collected at the time point at which the gazer touches the point to a server.
16. The eye tracking method of claim 13, wherein,
in the step of collecting learning data, in a state in which the imaging device is running, when the gazer touches the point, the learning data is collected at the time point at which the touch occurs and at time points separated by a set time interval before and after the time point at which the touch occurs, respectively.
17. The eye tracking method of claim 13, further comprising
a step of changing, in the learning data collection unit, after the gazer touches the point, the visual elements of the point so that the gaze of the gazer remains on the point after the touch.
18. The eye tracking method of claim 12, wherein
the step of collecting learning data displays a set sentence at the point and, when the gazer makes a sound, collects the learning data at the start time point of the utterance.
19. The eye tracking method of claim 11, further comprising
a step of obtaining, in the eye tracking unit and based on the rule, from the face image pupil position coordinates and face position coordinates of the user,
wherein the step of tracking the gaze of the user inputs the pupil position coordinates and the face position coordinates, together with the vector indicating the direction of the user's face, into the deep learning model.
20. The eye tracking method of claim 11, further comprising:
a step of displaying, in a content providing unit, advertising content on the screen;
a step of judging, in the eye tracking unit, whether the user gazes at the advertising content based on the detected gaze of the user and the position of the advertising content in the screen; and
a step of changing, in the content providing unit, the position of the advertising content in the screen by considering the position of the advertising content in the screen and the time for which the user gazes at the advertising content.
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2017-0117059 | 2017-09-13 | ||
KR20170117059 | 2017-09-13 | ||
KR1020170167334A KR102092931B1 (en) | 2017-09-13 | 2017-12-07 | Method for eye-tracking and user terminal for executing the same |
KR10-2017-0167334 | 2017-12-07 | ||
PCT/KR2018/004562 WO2019054598A1 (en) | 2017-09-13 | 2018-04-19 | Eye tracking method and user terminal for performing same |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110476141A true CN110476141A (en) | 2019-11-19 |
Family
ID=66037021
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201880023479.7A Pending CN110476141A (en) | 2017-09-13 | 2018-04-19 | Sight tracing and user terminal for executing this method |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR102092931B1 (en) |
CN (1) | CN110476141A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113283402A (en) * | 2021-07-21 | 2021-08-20 | 北京科技大学 | Differential two-dimensional fixation point detection method and device |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110058693B (en) * | 2019-04-23 | 2022-07-15 | 北京七鑫易维科技有限公司 | Data acquisition method and device, electronic equipment and storage medium |
KR20210009066A (en) | 2019-07-16 | 2021-01-26 | 삼성전자주식회사 | Method and apparatus of estimating intention of user |
KR102158432B1 (en) * | 2019-12-06 | 2020-09-24 | (주)미디어스퀘어 | System and method for redeploying ui components based on user's interaction on user interface |
KR102498209B1 (en) * | 2020-09-10 | 2023-02-09 | 상명대학교산학협력단 | Method, server and program for judge whether an advertisement is actually watched through iris recognition |
KR102319328B1 (en) * | 2020-11-12 | 2021-10-29 | 이종우 | Method of Evaluating Learning Attitudes Using Video Images of Non-face-to-face Learners, and Management Server Used Therein |
KR102336574B1 (en) * | 2020-12-04 | 2021-12-07 | (주)매트리오즈 | Learning Instruction Method Using Video Images of Non-face-to-face Learners, and Management Server Used Therein |
KR102508937B1 (en) * | 2021-05-17 | 2023-03-10 | 삼육대학교산학협력단 | System for management of hazardous gas |
KR102426071B1 (en) * | 2021-07-21 | 2022-07-27 | 정희석 | Artificial intelligence based gaze recognition device and method, and unmanned information terminal using the same |
KR102586828B1 (en) * | 2022-12-16 | 2023-10-10 | 한국전자기술연구원 | Advanced image display method based on face, eye shape, and HMD wearing position |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20000056563A (en) * | 1999-02-23 | 2000-09-15 | 김재희 | gaze detection system |
US20070230797A1 (en) * | 2006-03-30 | 2007-10-04 | Fujifilm Corporation | Method, apparatus, and program for detecting sightlines |
US20140096077A1 (en) * | 2012-09-28 | 2014-04-03 | Michal Jacob | System and method for inferring user intent based on eye movement during observation of a display screen |
KR20150038141A (en) * | 2012-07-20 | 2015-04-08 | 페이스북, 인크. | Adjusting mobile device state based on user intentions and/or identity |
JP2015232771A (en) * | 2014-06-09 | 2015-12-24 | 国立大学法人静岡大学 | Face detection method, face detection system and face detection program |
US20160109945A1 (en) * | 2013-05-30 | 2016-04-21 | Umoove Services Ltd. | Smooth pursuit gaze tracking |
KR20160072015A (en) * | 2014-12-12 | 2016-06-22 | 삼성전자주식회사 | Device and method for arranging contents displayed on the screen |
WO2016208261A1 (en) * | 2015-06-26 | 2016-12-29 | ソニー株式会社 | Information processing device, information processing method, and program |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101479471B1 (en) | 2012-09-24 | 2015-01-13 | 네이버 주식회사 | Method and system for providing advertisement based on user sight |
- 2017-12-07: KR KR1020170167334A patent/KR102092931B1/en active IP Right Grant
- 2018-04-19: CN CN201880023479.7A patent/CN110476141A/en active Pending
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20000056563A (en) * | 1999-02-23 | 2000-09-15 | 김재희 | gaze detection system |
US20070230797A1 (en) * | 2006-03-30 | 2007-10-04 | Fujifilm Corporation | Method, apparatus, and program for detecting sightlines |
KR20150038141A (en) * | 2012-07-20 | 2015-04-08 | 페이스북, 인크. | Adjusting mobile device state based on user intentions and/or identity |
CN104641657A (en) * | 2012-07-20 | 2015-05-20 | 脸谱公司 | Adjusting mobile device state based on user intentions and/or identity |
US20140096077A1 (en) * | 2012-09-28 | 2014-04-03 | Michal Jacob | System and method for inferring user intent based on eye movement during observation of a display screen |
US20160109945A1 (en) * | 2013-05-30 | 2016-04-21 | Umoove Services Ltd. | Smooth pursuit gaze tracking |
JP2015232771A (en) * | 2014-06-09 | 2015-12-24 | 国立大学法人静岡大学 | Face detection method, face detection system and face detection program |
KR20160072015A (en) * | 2014-12-12 | 2016-06-22 | 삼성전자주식회사 | Device and method for arranging contents displayed on the screen |
WO2016208261A1 (en) * | 2015-06-26 | 2016-12-29 | ソニー株式会社 | Information processing device, information processing method, and program |
Also Published As
Publication number | Publication date |
---|---|
KR102092931B1 (en) | 2020-03-24 |
KR20190030140A (en) | 2019-03-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110476141A (en) | Sight tracing and user terminal for executing this method | |
CN109359548B (en) | Multi-face recognition monitoring method and device, electronic equipment and storage medium | |
US9471912B2 (en) | Behavioral event measurement system and related method | |
CN101853390B (en) | Method and apparatus for media viewer health care | |
Porzi et al. | A smart watch-based gesture recognition system for assisting people with visual impairments | |
CN107818180A (en) | Video correlating method, image display method, device and storage medium | |
CN110476180A (en) | User terminal for providing the method for the reward type advertising service based on text reading and for carrying out this method | |
CN105184246A (en) | Living body detection method and living body detection system | |
CN111240482B (en) | Special effect display method and device | |
JP2016122272A (en) | Availability calculation system, availability calculation method and availability calculation program | |
CN103530788A (en) | Multimedia evaluating system, multimedia evaluating device and multimedia evaluating method | |
CN111311554A (en) | Method, device and equipment for determining content quality of image-text content and storage medium | |
CN104408402A (en) | Face identification method and apparatus | |
CN112101123A (en) | Attention detection method and device | |
WO2020103293A1 (en) | Method, device, and electronic device for presenting individual search information | |
Khowaja et al. | Facial expression recognition using two-tier classification and its application to smart home automation system | |
CN113076903A (en) | Target behavior detection method and system, computer equipment and machine readable medium | |
CN102783174B (en) | Image processing equipment, content delivery system, image processing method and program | |
CN112163095A (en) | Data processing method, device, equipment and storage medium | |
JP2021026744A (en) | Information processing device, image recognition method, and learning model generation method | |
CN113766297B (en) | Video processing method, playing terminal and computer readable storage medium | |
Chen et al. | Blinking: Toward wearable computing that understands your current task | |
CN111639705B (en) | Batch picture marking method, system, machine readable medium and equipment | |
KR102568875B1 (en) | Server for providing service for recommending mentor and lecture and method for operation thereof | |
US11250242B2 (en) | Eye tracking method and user terminal performing same |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20191119