CN108491823A - Method and apparatus for generating eye recognition model - Google Patents

Method and apparatus for generating eye recognition model

Info

Publication number
CN108491823A
CN108491823A (application CN201810286481.5A)
Authority
CN
China
Prior art keywords
eye
human eye
sample
eye image
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810286481.5A
Other languages
Chinese (zh)
Other versions
CN108491823B (en)
Inventor
陈艳琴 (Chen Yanqin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Baidu Online Network Technology Beijing Co Ltd
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201810286481.5A
Publication of CN108491823A
Application granted
Publication of CN108491823B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/197 Matching; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns

Abstract

Embodiments of the present application disclose a method and apparatus for generating an eye recognition model. One specific implementation of the method includes: obtaining a training sample set, where each training sample includes a sample eye image together with annotated eye-direction information and annotated eye-discrimination information corresponding to the sample eye image; the annotated eye-direction information indicates the gaze direction of the human eye depicted in the sample eye image, and the annotated eye-discrimination information indicates whether that eye is fixating on a target position; and, using a machine learning method, training an eye recognition model by taking the sample eye images of the training samples in the training sample set as input and the corresponding annotated eye-direction information and annotated eye-discrimination information as output. This improves the flexibility of generating eye recognition models for human eye detection.

Description

Method and apparatus for generating eye recognition model
Technical field
Embodiments of the present application relate to the field of computer technology, and in particular to a method and apparatus for generating an eye recognition model.
Background
Human eye detection is widely used in computer vision, for example in expression recognition, gaze tracking, and face recognition. In practice, eye detection technology can be applied to unlocking devices such as mobile phones and tablet computers, and also to tasks such as judging whether a driver is fatigued.
Methods for human eye detection fall broadly into two classes. One class exploits the red-eye effect peculiar to the human eye: a red-eye image is compared with a non-red-eye dark-pupil image to detect and track the eyes. The other class operates on color or grayscale images, using recognition algorithms such as template matching, projection, and neural networks.
Summary of the invention
Embodiments of the present application propose a method and apparatus for generating an eye recognition model.
In a first aspect, an embodiment of the present application provides a method for generating an eye recognition model. The method includes: obtaining a training sample set, where each training sample includes a sample eye image together with annotated eye-direction information and annotated eye-discrimination information corresponding to the sample eye image; the annotated eye-direction information indicates the gaze direction of the human eye depicted in the sample eye image, and the annotated eye-discrimination information indicates whether that eye is fixating on a target position; and, using a machine learning method, training an eye recognition model by taking the sample eye images of the training samples in the training sample set as input and the corresponding annotated eye-direction information and annotated eye-discrimination information as output.
In some embodiments, training the eye recognition model by taking the sample eye images as input and the corresponding annotations as output includes: extracting an initial neural network; and performing the following training step: inputting at least one sample eye image from the training sample set into the initial neural network to obtain eye-direction information and eye-discrimination information for each of the at least one sample eye image; comparing, for each sample eye image, the obtained eye-direction information and eye-discrimination information with the corresponding annotated eye-direction information and annotated eye-discrimination information; determining from the comparison whether the initial neural network has reached a preset optimization goal; and, in response to determining that it has, taking the initial neural network as the trained eye recognition model.
In some embodiments, the training further includes: in response to determining that the initial neural network has not reached the optimization goal, adjusting the network parameters of the initial neural network, forming a new training sample set from unused training samples, and continuing to perform the training step.
In some embodiments, determining whether the initial neural network has reached the preset optimization goal includes: based on a preset first loss function and a preset second loss function, computing the value of each, where the value of the first loss function characterizes the difference between the eye-direction information produced for an input sample eye image and the corresponding annotated eye-direction information, and the value of the second loss function characterizes the difference between the produced eye-discrimination information and the annotated eye-discrimination information; and, in response to determining that the value of the first loss function is less than or equal to a preset first threshold and the value of the second loss function is less than or equal to a preset second threshold, determining that the initial neural network has reached the optimization goal.
In a second aspect, an embodiment of the present application provides a method for recognizing a human eye. The method includes: obtaining an eye image to be recognized; and inputting the eye image into a pre-trained eye recognition model to obtain eye-direction information and eye-discrimination information, where the eye recognition model is generated according to any implementation of the first aspect, the eye-direction information indicates the gaze direction of the human eye depicted in the image, and the eye-discrimination information indicates whether that eye is fixating on a target position.
In a third aspect, an embodiment of the present application provides an apparatus for generating an eye recognition model. The apparatus includes: an acquiring unit configured to obtain a training sample set, where each training sample includes a sample eye image together with annotated eye-direction information and annotated eye-discrimination information corresponding to the sample eye image, the annotated eye-direction information indicating the gaze direction of the human eye depicted in the sample eye image, and the annotated eye-discrimination information indicating whether that eye is fixating on a target position; and a training unit configured to use a machine learning method to train an eye recognition model by taking the sample eye images of the training samples in the training sample set as input and the corresponding annotated eye-direction information and annotated eye-discrimination information as output.
In some embodiments, the training unit includes: an extraction module configured to extract an initial neural network; and a training module configured to perform the following training step: input at least one sample eye image from the training sample set into the initial neural network to obtain eye-direction information and eye-discrimination information for each of the at least one sample eye image; compare, for each sample eye image, the obtained eye-direction information and eye-discrimination information with the corresponding annotated eye-direction information and annotated eye-discrimination information; determine from the comparison whether the initial neural network has reached a preset optimization goal; and, in response to determining that it has, take the initial neural network as the trained eye recognition model.
In some embodiments, the training unit is further configured to: in response to determining that the initial neural network has not reached the optimization goal, adjust the network parameters of the initial neural network, form a new training sample set from unused training samples, and continue to perform the training step.
In some embodiments, the training module includes: a first determination submodule configured to compute, based on a preset first loss function and a preset second loss function, the value of each, where the value of the first loss function characterizes the difference between the eye-direction information produced for an input sample eye image and the corresponding annotated eye-direction information, and the value of the second loss function characterizes the difference between the produced eye-discrimination information and the annotated eye-discrimination information; and a second determination submodule configured to determine, in response to the value of the first loss function being less than or equal to a preset first threshold and the value of the second loss function being less than or equal to a preset second threshold, that the initial neural network has reached the optimization goal.
In a fourth aspect, an embodiment of the present application provides an apparatus for recognizing a human eye. The apparatus includes: an acquiring unit configured to obtain an eye image to be recognized; and a recognition unit configured to input the eye image into a pre-trained eye recognition model to obtain eye-direction information and eye-discrimination information, where the eye recognition model is generated according to any implementation of the first aspect, the eye-direction information indicates the gaze direction of the human eye depicted in the image, and the eye-discrimination information indicates whether that eye is fixating on a target position.
In a fifth aspect, an embodiment of the present application provides an electronic device including: one or more processors; and a storage device for storing one or more programs. When the one or more programs are executed by the one or more processors, the one or more processors implement the method described in any implementation of the first or second aspect.
In a sixth aspect, an embodiment of the present application provides a computer-readable medium on which a computer program is stored; when the computer program is executed by a processor, the method described in any implementation of the first or second aspect is implemented.
The method and apparatus for generating an eye recognition model provided by the embodiments of the present application obtain a set of training samples, each comprising a sample eye image together with corresponding annotated eye-direction information and annotated eye-discrimination information, and then use a machine learning method to train an eye recognition model by taking the sample eye images of the training samples as input and the corresponding annotations as output, thereby improving the flexibility of generating eye recognition models for human eye detection.
Description of the drawings
Other features, objects, and advantages of the present application will become more apparent upon reading the following detailed description of non-limiting embodiments with reference to the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram to which the present application may be applied;
Fig. 2 is a flowchart of one embodiment of the method for generating an eye recognition model according to the present application;
Fig. 3A is a schematic diagram of the positional relationship between the human eye and the target in the method for generating an eye recognition model according to the present application;
Fig. 3B is a schematic diagram of the region division of the target used for determining annotated eye-direction information in the method for generating an eye recognition model according to the present application;
Fig. 3C is a schematic diagram of the position of the camera on the target used for determining annotated eye-discrimination information in the method for generating an eye recognition model according to the present application;
Fig. 4 is a schematic diagram of the multi-task-learning neural network in the method for generating an eye recognition model according to the present application;
Fig. 5 is a schematic diagram of one application scenario of the method for generating an eye recognition model according to the present application;
Fig. 6 is a flowchart of one embodiment of the method for recognizing a human eye according to the present application;
Fig. 7 is a schematic structural diagram of one embodiment of the apparatus for generating an eye recognition model according to the present application;
Fig. 8 is a schematic structural diagram of one embodiment of the apparatus for recognizing a human eye according to the present application;
Fig. 9 is a schematic structural diagram of a computer system suitable for implementing the electronic device of the embodiments of the present application.
Detailed description of embodiments
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the relevant invention and do not limit it. It should also be noted that, for ease of description, the drawings show only the parts relevant to the invention.
It should be noted that, in the absence of conflict, the embodiments of the present application and the features in those embodiments may be combined with each other. The present application is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the method for generating an eye recognition model, or of the apparatus for generating an eye recognition model, may be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102 and 103, a network 104, and a server 105. The network 104 serves as the medium providing communication links between the terminal devices 101, 102 and 103 and the server 105, and may include various connection types, such as wired or wireless communication links, or fiber-optic cables.
A user may use the terminal devices 101, 102 and 103 to interact with the server 105 through the network 104 to receive or send messages and the like. Various communication client applications, such as data-processing applications, image-processing applications, and payment applications, may be installed on the terminal devices 101, 102 and 103.
The terminal devices 101, 102 and 103 may be hardware or software. When they are hardware, they may be various electronic devices with a display screen, either provided with a camera or communicatively connected to a camera by a wireless or wired connection, including but not limited to smartphones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), laptop computers, desktop computers, and so on. When the terminal devices 101, 102 and 103 are software, they may be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module. No specific limitation is made here.
The server 105 may be a server providing various services, for example a background server providing support for the applications on the terminal devices 101, 102 and 103. The background server may train on the obtained training sample set and feed the training result (for example, an eye recognition model) back to the terminal devices, or store the training result in the background server.
It should be noted that the server 105 may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (for example, to provide distributed services), or as a single piece of software or software module. No specific limitation is made here.
It should be noted that the method for generating an eye recognition model provided by the embodiments of the present application may be performed by the server 105 or by the terminal devices 101, 102 and 103. Correspondingly, the apparatus for generating an eye recognition model may be set in the server 105 or in the terminal devices 101, 102 and 103.
It should be understood that the numbers of terminal devices, networks, and servers in Fig. 1 are merely illustrative. There may be any number of terminal devices, networks, and servers, as required by the implementation.
With continued reference to Fig. 2, there is shown a flow 200 of one embodiment of the method for generating an eye recognition model according to the present application. The method includes the following steps:
Step 201: obtain a training sample set.
In this embodiment, the executing body of the method for generating an eye recognition model (for example, the server or a terminal device shown in Fig. 1) may obtain the training sample set remotely or locally, through a wired or wireless connection. Each training sample may include a sample eye image, together with annotated eye-direction information and annotated eye-discrimination information corresponding to the sample eye image.
In this embodiment, the annotated eye-direction information may indicate the gaze direction of the human eye depicted in the sample eye image. The gaze direction may include, but is not limited to, at least one of: up, down, left, right, forward, and invalid (for example, eyes closed). As an example, as shown in Fig. 3A, 301 is a circular target used when collecting sample eye images with a camera; the horizontal plane through the pupil center of the human eye 302 is the same as the horizontal plane through the center of the target 301, or the vertical distance between the two is less than or equal to a preset distance. As shown in Fig. 3B, the target 301 is divided into regions 3011, 3012, 3013, 3014 and 3015 of preset sizes, and the camera capturing the eye images is set at a preset position. The annotated eye-direction information of an eye image captured while the eye 302 gazes at region 3011 may be set to "up"; at region 3012, to "down"; at region 3013, to "left"; at region 3014, to "right"; and at region 3015, to "forward". The annotation of an eye image captured while the eye 302 is not gazing properly (for example, closed or squinting) may be set to "invalid".
In this embodiment, the annotated eye-discrimination information may indicate whether the human eye depicted in the sample eye image is fixating on a target position. The target position may be the position of the camera capturing the eye image, or another position specified by a technician (for example, a certain spatial region). As shown in Fig. 3C, a camera 303 is set at a preset position on the target 301. When the eye gazes at the camera's placement position (or at a circular region of preset radius centered on that position), the annotated eye-discrimination information of the captured image may be set to "fixating"; when the eye gazes at a region other than the camera's placement position, it may be set to "not fixating". It should be understood that the camera 303 may be placed at multiple positions, and a technician may annotate in advance the eye images captured at each position.
It should be noted that the sample eye images may be images of various types, such as color images, infrared images, or grayscale images.
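The two annotations described above can be produced by a simple labeling routine. The sketch below is illustrative only: the region identifiers, the English label strings, and the `make_training_sample` helper are assumptions for the sake of the example, not part of the patent.

```python
# Map the gazed region of the target (Fig. 3B) to a direction label, and the
# gazed position relative to the camera (Fig. 3C) to a fixation label.

DIRECTION_BY_REGION = {
    "3011": "up",
    "3012": "down",
    "3013": "left",
    "3014": "right",
    "3015": "forward",
}

def direction_label(region_id, eye_open=True):
    """Annotated eye-direction label for one sample eye image."""
    if not eye_open:                       # closed or squinting eyes
        return "invalid"
    return DIRECTION_BY_REGION.get(region_id, "invalid")

def fixation_label(gaze_point, camera_point, radius):
    """'fixating' if the gaze point falls inside a circle of preset radius
    around the camera's placement position, else 'not fixating'."""
    dx = gaze_point[0] - camera_point[0]
    dy = gaze_point[1] - camera_point[1]
    return "fixating" if dx * dx + dy * dy <= radius * radius else "not fixating"

def make_training_sample(image, region_id, gaze_point, camera_point, radius,
                         eye_open=True):
    """Bundle one sample eye image with its two annotations."""
    return {
        "image": image,
        "direction": direction_label(region_id, eye_open),
        "fixation": fixation_label(gaze_point, camera_point, radius),
    }
```

For instance, an image captured while the eye gazes at region 3015 near the camera would be annotated "forward" and "fixating".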
Step 202: using a machine learning method, train an eye recognition model by taking the sample eye images of the training samples in the training sample set as input, and the annotated eye-direction information and annotated eye-discrimination information corresponding to the input sample eye images as output.
In this embodiment, based on the training sample set obtained in step 201, the executing body may use a machine learning method to train an eye recognition model by taking the sample eye images as input and the corresponding annotated eye-direction information and annotated eye-discrimination information as output. The eye recognition model may be an eye-image recognition model obtained by a technician through supervised training of an existing artificial neural network (for example, a convolutional neural network or a recurrent neural network). The artificial neural network may have any of various existing structures (for example, DenseBox, VGGNet, ResNet, or SegNet).
In some optional implementations of this embodiment, the executing body may train the eye recognition model as follows:
First, extract an initial neural network. The initial neural network may be any type of untrained or incompletely trained artificial neural network. Each layer of the initial neural network may be provided with initial parameters, which are continuously adjusted during training. As an example, the initial neural network may be an untrained convolutional neural network (which may include, for example, convolutional layers, pooling layers, and fully connected layers).
Then, perform the following training step. First, input at least one sample eye image from the training sample set into the initial neural network to obtain eye-direction information and eye-discrimination information for each of the at least one sample eye image. Second, compare, for each sample eye image, the obtained eye-direction information and eye-discrimination information with the corresponding annotated eye-direction information and annotated eye-discrimination information, and determine from the comparison whether the initial neural network has reached a preset optimization goal. The preset optimization goal may be, for example, that the recognition accuracy of the initial neural network (for example, measured on a preset test sample set) reaches a preset threshold. Third, in response to determining that the initial neural network has reached the optimization goal, take the initial neural network as the trained eye recognition model.
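The training step described above reduces to a loop: run a batch of sample eye images through the network, compare the two predicted outputs with the annotations, and report whether a preset optimization goal is met. The sketch below abstracts the network as a callable returning a (direction, fixation) pair; the accuracy threshold and the batch format are assumptions, not the patent's prescription.

```python
def run_training_step(network, batch, accuracy_target=0.9):
    """One training step: forward at least one annotated sample through the
    network, compare predictions with annotations, and report whether the
    preset optimization goal (accuracy on both outputs) is reached."""
    correct_dir = correct_fix = 0
    for sample in batch:
        pred_dir, pred_fix = network(sample["image"])
        correct_dir += pred_dir == sample["direction"]
        correct_fix += pred_fix == sample["fixation"]
    n = len(batch)
    dir_acc, fix_acc = correct_dir / n, correct_fix / n
    reached = dir_acc >= accuracy_target and fix_acc >= accuracy_target
    return reached, dir_acc, fix_acc
```

If the goal is not reached, the caller would adjust the network's parameters and repeat with fresh samples, as the next embodiment describes.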
In some optional implementations of this embodiment, the step of training the eye recognition model may further include the following: in response to determining that the initial neural network has not reached the optimization goal, adjust the network parameters of the initial neural network, form a new training sample set from unused training samples, and continue to perform the training step. As an example, the back-propagation algorithm (BP algorithm) and gradient descent (for example, mini-batch gradient descent) may be used to adjust the network parameters of the initial neural network. It should be noted that back-propagation and gradient descent are well-known techniques that are widely studied and applied at present, and are not described in detail here.
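The parameter adjustment mentioned here is ordinary gradient descent. The toy sketch below fits a single parameter by numerically estimating the gradient of a loss, purely to illustrate the update rule; the quadratic loss, learning rate, and numeric differentiation are illustrative assumptions rather than the patent's procedure (which back-propagates through a full network).

```python
def gradient_descent(loss, w, lr=0.1, steps=100, eps=1e-6):
    """Repeatedly move parameter w against the numeric gradient of loss(w)."""
    for _ in range(steps):
        grad = (loss(w + eps) - loss(w - eps)) / (2 * eps)  # central difference
        w -= lr * grad                                      # descent update
    return w

# Toy loss with its minimum at w = 3; descent should converge there.
w_final = gradient_descent(lambda w: (w - 3.0) ** 2, w=0.0)
```

Back-propagation does the same thing at scale: it computes exact gradients for every network parameter in one backward pass instead of one numeric probe per parameter.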
In some optional implementations of this embodiment, the executing body may determine whether the initial neural network has reached the preset optimization goal as follows:
First, based on a preset first loss function and a preset second loss function, compute the value of each. The value of the first loss function characterizes the difference between the eye-direction information produced for an input sample eye image and the corresponding annotated eye-direction information; the value of the second loss function characterizes the difference between the produced eye-discrimination information and the annotated eye-discrimination information. As an example, the first and second loss functions may be loss functions of various types, such as the softmax loss function or the sigmoid loss function; the two may be of the same type or of different types.
Then, in response to determining that the value of the first loss function is less than or equal to a preset first threshold and the value of the second loss function is less than or equal to a preset second threshold, determine that the initial neural network has reached the optimization goal. In general, a loss function characterizes the difference between a neural network's prediction and the ground truth: the smaller the value of the loss function, the smaller that difference.
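The two-threshold criterion can be made concrete with cross-entropy standing in for both losses (the patent names softmax and sigmoid losses as examples). In the sketch below, predictions are given as class-probability lists and the annotations as class indices; the default thresholds are assumptions for illustration.

```python
import math

def cross_entropy(probs, true_index):
    """Cross-entropy for one prediction: -log of the probability the
    network assigned to the annotated class."""
    return -math.log(probs[true_index])

def reached_optimization_goal(direction_probs, direction_true,
                              fixation_probs, fixation_true,
                              threshold1=0.5, threshold2=0.5):
    """Goal reached only when the first loss (gaze direction) and the
    second loss (fixation) are both at or below their preset thresholds."""
    loss1 = cross_entropy(direction_probs, direction_true)
    loss2 = cross_entropy(fixation_probs, fixation_true)
    return loss1 <= threshold1 and loss2 <= threshold2
```

Requiring both thresholds, rather than their sum, prevents one well-learned task from masking a poorly learned one.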
Optionally, the executing body may train the initial neural network with a multi-task machine learning method to obtain the eye detection model. Multi-task learning is an inductive transfer method: it allows features in each layer of a neural network that are specialized for one task to be used by other tasks. Because the features of different tasks may be interrelated, multi-task learning can yield features applicable to several different tasks, which can improve the recognition accuracy of the trained model. In practice, multi-task learning takes many forms, such as joint learning, learning to learn, and learning with auxiliary tasks. As an example, as shown in Fig. 4, the initial neural network 401 performs two tasks: task 1 determines the eye-direction information of the input eye image 402, and task 2 determines its eye-discrimination information. The feature data used by task 1 and task 2 (represented by the circles in Fig. 4) may be shared between the layers 4011-4013 (the lines between the circles in Fig. 4 represent feature data shared between layers). In this way, the trained eye recognition model can further improve the accuracy of human eye detection. It should be noted that multi-task learning is a well-known technique that is widely studied and applied at present, and is not described in detail here.
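The shared-layer arrangement of Fig. 4 can be sketched as a tiny forward pass: one shared trunk whose features feed two task heads, one for gaze direction and one for fixation. The layer sizes, the plain-Python list representation, and the softmax heads are illustrative assumptions; a real model would use a deep-learning framework.

```python
import math

def dense(x, weights, bias):
    """One fully connected layer: y_i = row_i . x + b_i."""
    return [sum(w * xi for w, xi in zip(row, x)) + b
            for row, b in zip(weights, bias)]

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def multitask_forward(x, shared_w, shared_b, dir_w, dir_b, fix_w, fix_b):
    """Shared trunk features are consumed by both task heads, so during
    training, gradients from either task would update the shared
    parameters (the essence of multi-task learning)."""
    h = [max(0.0, v) for v in dense(x, shared_w, shared_b)]  # shared ReLU layer
    direction = softmax(dense(h, dir_w, dir_b))   # task 1: gaze direction
    fixation = softmax(dense(h, fix_w, fix_b))    # task 2: fixating or not
    return direction, fixation
```

Both heads emit probability distributions over their own label sets while reading the same trunk features, mirroring the shared circles and connecting lines of Fig. 4.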
With continued reference to Fig. 5, Fig. 5 is a schematic diagram of an application scenario of the method for generating a human eye recognition model according to the present embodiment. In the application scenario of Fig. 5, a computer 502 obtains a training sample set 503 from another computer 501 communicatively connected to it. Each training sample includes a sample eye image 5031, together with labeled human eye direction information 5032 and labeled human eye discriminant information 5033 corresponding to the sample eye image. Then, the computer 502 extracts an initial model 504 (for example, a convolutional neural network) and inputs the sample eye images into the initial model 504 one by one, taking the labeled human eye direction information (for example, "forward") and labeled human eye discriminant information (for example, "gazing") corresponding to each input sample eye image as the output, so as to train the initial model. By repeatedly adjusting the parameters of the initial model 504, the human eye recognition model 505 is finally obtained. The human eye recognition model 505 can recognize an input eye image, determine the gaze direction of the human eye characterized by the input eye image, and determine whether the human eye characterized by the input eye image is gazing at a target position.
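A toy sketch of the "repeatedly adjust the parameters" loop above, reduced to a single linear head trained on synthetic data; the reduction to one task, the synthetic features standing in for eye images, and all names are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins: 64-dim feature vectors in place of sample eye images,
# and a binary gaze label made learnable from the first feature.
X = rng.standard_normal((200, 64))
gaze_labels = (X[:, 0] > 0).astype(int)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

W = np.zeros((64, 2))  # a single linear layer in place of initial model 504

lr = 0.1
for _ in range(300):  # repeatedly adjust the model parameters
    probs = softmax(X @ W)
    grad = X.T @ (probs - np.eye(2)[gaze_labels]) / len(X)
    W -= lr * grad

train_accuracy = float((softmax(X @ W).argmax(axis=1) == gaze_labels).mean())
```

After enough parameter adjustments the toy model fits its training labels, mirroring (in miniature) how model 505 results from repeated updates of model 504.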
The method provided by the above embodiment of the present application obtains a set of training samples, each including a sample eye image together with corresponding labeled human eye direction information and labeled human eye discriminant information, and then uses a machine learning method, taking the sample eye images of the training samples in the training sample set as input and the labeled human eye direction information and labeled human eye discriminant information corresponding to the input sample eye images as output, to train and obtain a human eye recognition model, thereby improving the flexibility of generating a human eye recognition model for human eye detection.
With further reference to Fig. 6, there is shown a flow 600 of one embodiment of a method for recognizing a human eye provided by the present application. The method for recognizing a human eye may include the following steps:
Step 601: obtain an eye image to be recognized.
In the present embodiment, the executive agent of the method for recognizing a human eye (for example, the server or the terminal device shown in Fig. 1) can obtain the eye image to be recognized in various ways. For example, the executive agent may include a camera that photographs the eyes of a user, and the resulting eye image of the user is taken as the image to be recognized. As another example, assuming the executive agent is the server shown in Fig. 1, the executive agent can obtain the eye image to be recognized from a terminal device such as the one shown in Fig. 1 through a wired or wireless connection.
In the present embodiment, the human eye characterized by the eye image to be recognized can be the eyes of an arbitrary person (for example, a user of the executive agent that includes the camera, or a person appearing in the shooting range of a camera communicatively connected to the executive agent).
Step 602: input the eye image to be recognized into a pre-trained human eye recognition model to obtain human eye direction information and human eye discriminant information.
In the present embodiment, based on the eye image to be recognized obtained in step 601, the above executive agent can input the eye image to be recognized into a pre-trained human eye recognition model to obtain human eye direction information and human eye discriminant information. The human eye direction information is used to indicate the gaze direction of the human eye characterized by the eye image to be recognized, and the human eye discriminant information is used to indicate whether the human eye characterized by the eye image to be recognized is gazing at a target position.
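This inference step can be sketched as follows; the model object, the label vocabularies (drawn from the direction categories mentioned elsewhere in the description), and the function names are assumptions, since the disclosure does not fix an API:

```python
import numpy as np

DIRECTIONS = ["up", "down", "left", "right", "forward", "invalid"]
GAZE = ["gazing", "not_gazing"]

def recognize_eye(model, eye_image):
    """Run a trained model on one eye image and decode both outputs."""
    direction_probs, gaze_probs = model(eye_image[np.newaxis, :])
    direction_info = DIRECTIONS[int(direction_probs.argmax(axis=1)[0])]
    gaze_info = GAZE[int(gaze_probs.argmax(axis=1)[0])]
    return direction_info, gaze_info
```

The two decoded strings correspond to the human eye direction information and human eye discriminant information returned by step 602.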
In the present embodiment, the human eye recognition model can be generated using the method described in the embodiment of Fig. 2 above. For the specific generation process, refer to the related description of the Fig. 2 embodiment, which is not repeated here.
Optionally, after obtaining the human eye direction information and the human eye discriminant information, the above executive agent can, based on the obtained human eye direction information and human eye discriminant information, generate preset information of various kinds (for example, warning information) and output it in various forms (for example, text form, image form, audio form, and so on).
As an example, the above executive agent can be a mobile phone, and the eye image to be recognized can be an eye image of a user captured by the front camera of the phone. After the phone receives the user's screen-unlock operation signal, the camera captures an eye image of the user, and the phone can input the eye image into the human eye recognition model to obtain the human eye direction information "forward" and the human eye discriminant information "gazing". Here, "gazing" characterizes that the user's eyes are gazing at the front camera of the phone. Then, according to a preset unlock condition (for example, human eye direction information "forward" and human eye discriminant information "gazing"), the phone can generate an unlock signal to unlock the phone screen.
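The preset unlock condition in this example can be expressed as a simple predicate over the model's two outputs (the function and parameter names are assumptions of this sketch):

```python
def should_unlock(direction_info, gaze_info,
                  required_direction="forward", required_gaze="gazing"):
    """Apply the preset unlock condition to the two model outputs."""
    return direction_info == required_direction and gaze_info == required_gaze
```

The phone would emit the unlock signal only when this predicate is true for the captured eye image.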
The method provided by this embodiment obtains an eye image to be recognized and inputs it into a pre-trained human eye recognition model to obtain human eye direction information and human eye discriminant information, thereby improving the accuracy of determining the gaze direction of a human eye from an eye image and of discriminating whether the human eye is gazing at a target position.
With further reference to Fig. 7, as an implementation of the method shown in Fig. 2 above, the present application provides one embodiment of an apparatus for generating a human eye recognition model. This apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus can be applied in various electronic devices.
As shown in Fig. 7, the apparatus 700 for generating a human eye recognition model of the present embodiment includes: an acquiring unit 701, configured to obtain a training sample set, where each training sample includes a sample eye image and, corresponding to the sample eye image, labeled human eye direction information and labeled human eye discriminant information; the labeled human eye direction information indicates the gaze direction of the human eye characterized by the sample eye image, and the labeled human eye discriminant information indicates whether the human eye characterized by the sample eye image is gazing at a target position; and a training unit 702, configured to use a machine learning method, taking the sample eye images of the training samples in the training sample set as input and the labeled human eye direction information and labeled human eye discriminant information corresponding to the input sample eye images as output, to train and obtain the human eye recognition model.
In the present embodiment, the acquiring unit 701 can obtain the training sample set remotely or locally through a wired or wireless connection. Each training sample may include a sample eye image and, corresponding to it, labeled human eye direction information and labeled human eye discriminant information. The labeled human eye direction information can indicate the gaze direction of the human eye characterized by the sample eye image. The gaze direction can include, but is not limited to, at least one of the following: up, down, left, right, forward, and an invalid direction (for example, eyes closed). The labeled human eye discriminant information can indicate whether the human eye characterized by the sample eye image is gazing at a target position, where the target position can be the position of the camera that captured the eye image or another position specified by a technician (for example, a certain spatial area).
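The training-sample structure described above can be sketched as a small data type; the six-way direction vocabulary is taken from this paragraph, while the field names, image shape, and one-hot encoding are illustrative assumptions:

```python
from dataclasses import dataclass

import numpy as np

DIRECTIONS = ("up", "down", "left", "right", "forward", "invalid")

@dataclass
class TrainingSample:
    eye_image: np.ndarray    # sample eye image (pixel array)
    direction_label: str     # labeled human eye direction information
    gazing_at_target: bool   # labeled human eye discriminant information

    def direction_one_hot(self):
        """Encode the direction label for a softmax direction head."""
        vec = np.zeros(len(DIRECTIONS))
        vec[DIRECTIONS.index(self.direction_label)] = 1.0
        return vec
```

A training sample set would then simply be a collection of such records, each pairing one image with its two labels.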
In the present embodiment, based on the training sample set obtained by the acquiring unit 701, the training unit 702 can use a machine learning method, taking the sample eye images of the training samples in the training sample set as input and the labeled human eye direction information and labeled human eye discriminant information corresponding to the input sample eye images as output, to train and obtain the human eye recognition model. The human eye recognition model can be a model for eye image recognition obtained by a technician through supervised training based on an existing artificial neural network (for example, a convolutional neural network, a recurrent neural network, etc.). Moreover, the artificial neural network can have any of various existing neural network structures (for example, DenseBox, VGGNet, ResNet, SegNet, etc.).
In some optional implementations of the present embodiment, the training unit 702 may include: an extraction module, configured to extract an initial neural network; and a training module, configured to perform the following training step: input at least one sample eye image in the training sample set into the initial neural network to obtain the human eye direction information and human eye discriminant information corresponding to each sample eye image in the at least one sample eye image; compare the human eye direction information and human eye discriminant information corresponding to each sample eye image in the at least one sample eye image with the corresponding labeled human eye direction information and labeled human eye discriminant information, respectively, and determine from the comparison result whether the initial neural network has reached a preset optimization target; and, in response to determining that the initial neural network has reached the optimization target, determine the initial neural network to be the trained human eye recognition model.
In some optional implementations of the present embodiment, the training unit 702 can be further configured to: in response to determining that the initial neural network has not reached the optimization target, adjust the network parameters of the initial neural network, form a training sample set from unused training samples, and continue to perform the training step.
In some optional implementations of the present embodiment, the training module may include: a first determination sub-module, configured to determine, based on a preset first loss function and a preset second loss function, the value of the first loss function and the value of the second loss function, where the value of the first loss function characterizes the degree of difference between the human eye direction information corresponding to the sample eye images input into the initial neural network and the corresponding labeled human eye direction information, and the value of the second loss function characterizes the degree of difference between the human eye discriminant information corresponding to the sample eye images input into the initial neural network and the labeled human eye discriminant information; and a second determination sub-module, configured to determine, in response to the value of the first loss function being less than or equal to a preset first threshold and the value of the second loss function being less than or equal to a preset second threshold, that the initial neural network has reached the optimization target.
It can be understood that the units described in the apparatus 700 correspond to the steps of the method described with reference to Fig. 2. Therefore, the operations, features, and beneficial effects described above for the method are equally applicable to the apparatus 700 and the units included therein, and are not repeated here.
With further reference to Fig. 8, as an implementation of the method shown in Fig. 6 above, the present application provides one embodiment of an apparatus for recognizing a human eye. This apparatus embodiment corresponds to the method embodiment shown in Fig. 6, and the apparatus can be applied in various electronic devices.
As shown in Fig. 8, the apparatus 800 for recognizing a human eye of the present embodiment includes: an acquiring unit 801, configured to obtain an eye image to be recognized; and a recognition unit 802, configured to input the eye image to be recognized into a pre-trained human eye recognition model to obtain human eye direction information and human eye discriminant information, where the human eye recognition model is generated according to any implementation of the first aspect, the human eye direction information is used to indicate the gaze direction of the human eye characterized by the eye image to be recognized, and the human eye discriminant information is used to indicate whether the human eye characterized by the eye image to be recognized is gazing at a target position.
It can be understood that the units described in the apparatus 800 correspond to the steps of the method described with reference to Fig. 6. Therefore, the operations, features, and beneficial effects described above for the method are equally applicable to the apparatus 800 and the units included therein, and are not repeated here.
Referring now to Fig. 9, there is shown a structural schematic diagram of a computer system 900 suitable for implementing an electronic device (for example, the server or terminal device shown in Fig. 1) of the embodiments of the present application. The electronic device shown in Fig. 9 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
As shown in Fig. 9, the computer system 900 includes a central processing unit (CPU) 901, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 902 or a program loaded from a storage section 908 into a random access memory (RAM) 903. The RAM 903 also stores various programs and data required for the operation of the system 900. The CPU 901, the ROM 902, and the RAM 903 are connected to one another through a bus 904. An input/output (I/O) interface 905 is also connected to the bus 904.
The following components are connected to the I/O interface 905: an input section 906 including a keyboard, a mouse, and the like; an output section 907 including a cathode ray tube (CRT), a liquid crystal display (LCD), a loudspeaker, and the like; a storage section 908 including a hard disk and the like; and a communication section 909 including a network interface card such as a LAN card or a modem. The communication section 909 performs communication processing via a network such as the Internet. A drive 910 is also connected to the I/O interface 905 as needed. A removable medium 911, such as a magnetic disk, an optical disc, a magneto-optical disc, or a semiconductor memory, is mounted on the drive 910 as needed, so that a computer program read therefrom can be installed into the storage section 908 as needed.
In particular, according to the embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the method shown in the flowchart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 909 and/or installed from the removable medium 911. When the computer program is executed by the central processing unit (CPU) 901, the above functions defined in the method of the present application are performed. It should be noted that the computer-readable medium described in the present application can be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable medium can be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable medium can include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present application, a computer-readable medium can be any tangible medium that contains or stores a program, which can be used by or in combination with an instruction execution system, apparatus, or device. Also in the present application, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal can take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium; such a medium can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on a computer-readable medium can be transmitted by any suitable medium, including but not limited to: wireless, electric wire, optical cable, RF, etc., or any suitable combination of the above.
The computer program code for performing the operations of the present application can be written in one or more programming languages or a combination thereof. The programming languages include object-oriented programming languages such as Java, Smalltalk, and C++, and also conventional procedural programming languages such as the "C" language or similar programming languages. The program code can be executed entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or can be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions, and operations of the systems, methods, and computer program products according to various embodiments of the present application. In this regard, each box in a flowchart or block diagram can represent a module, a program segment, or a part of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the boxes can occur in an order different from that indicated in the drawings. For example, two boxes shown in succession can actually be executed substantially in parallel, and they can sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each box in the block diagrams and/or flowcharts, and combinations of boxes in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments of the present application can be implemented in software or in hardware. The described units can also be arranged in a processor; for example, a processor can be described as including an acquiring unit and a training unit. Under certain conditions, the names of these units do not constitute a limitation on the units themselves; for example, the acquiring unit can also be described as "a unit for obtaining a training sample set".
As another aspect, the present application also provides a computer-readable medium, which can be included in the electronic device described in the above embodiments, or can exist separately without being assembled into the electronic device. The above computer-readable medium carries one or more programs. When the one or more programs are executed by the electronic device, the electronic device is caused to: obtain a training sample set, where each training sample includes a sample eye image and, corresponding to the sample eye image, labeled human eye direction information and labeled human eye discriminant information, the labeled human eye direction information indicating the gaze direction of the human eye characterized by the sample eye image and the labeled human eye discriminant information indicating whether the human eye characterized by the sample eye image is gazing at a target position; and, using a machine learning method, take the sample eye images of the training samples in the training sample set as input and the labeled human eye direction information and labeled human eye discriminant information corresponding to the input sample eye images as output, and train to obtain a human eye recognition model.
In addition, when the one or more programs are executed by the electronic device, the electronic device can also be caused to: obtain an eye image to be recognized; and input the eye image to be recognized into a pre-trained human eye recognition model to obtain human eye direction information and human eye discriminant information, where the human eye recognition model can be generated by the method for generating a human eye recognition model described in the above embodiments, the human eye direction information is used to indicate the gaze direction of the human eye characterized by the eye image to be recognized, and the human eye discriminant information is used to indicate whether the human eye characterized by the eye image to be recognized is gazing at a target position.
The above description is only a preferred embodiment of the present application and an explanation of the applied technical principles. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above inventive concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present application.

Claims (12)

1. A method for generating a human eye recognition model, comprising:
obtaining a training sample set, wherein each training sample comprises a sample eye image and, corresponding to the sample eye image, labeled human eye direction information and labeled human eye discriminant information, the labeled human eye direction information being used to indicate a gaze direction of a human eye characterized by the sample eye image, and the labeled human eye discriminant information being used to indicate whether the human eye characterized by the sample eye image is gazing at a target position; and
using a machine learning method, taking the sample eye images of the training samples in the training sample set as input and the labeled human eye direction information and labeled human eye discriminant information corresponding to the input sample eye images as output, and training to obtain the human eye recognition model.
2. The method according to claim 1, wherein the using a machine learning method, taking the sample eye image of each training sample in the training sample set as input and the labeled human eye direction information and labeled human eye discriminant information corresponding to the input sample eye image as output, and training to obtain the human eye recognition model comprises:
extracting an initial neural network; and
performing the following training step: inputting at least one sample eye image in the training sample set into the initial neural network to obtain human eye direction information and human eye discriminant information corresponding to each sample eye image in the at least one sample eye image; comparing the human eye direction information and human eye discriminant information corresponding to each sample eye image in the at least one sample eye image with the corresponding labeled human eye direction information and labeled human eye discriminant information, respectively, and determining, according to the comparison result, whether the initial neural network reaches a preset optimization target; and, in response to determining that the initial neural network reaches the optimization target, determining the initial neural network to be the trained human eye recognition model.
3. The method according to claim 2, wherein the using a machine learning method, taking the sample eye image of each training sample in the training sample set as input and the labeled human eye direction information and labeled human eye discriminant information corresponding to the input sample eye image as output, and training to obtain the human eye recognition model further comprises:
in response to determining that the initial neural network does not reach the optimization target, adjusting network parameters of the initial neural network, forming a training sample set from unused training samples, and continuing to perform the training step.
4. The method according to claim 2 or 3, wherein the comparing the human eye direction information and human eye discriminant information corresponding to each sample eye image in the at least one sample eye image with the corresponding labeled human eye direction information and labeled human eye discriminant information, respectively, and determining, according to the comparison result, whether the initial neural network reaches a preset optimization target comprises:
determining, based on a preset first loss function and a preset second loss function, a value of the first loss function and a value of the second loss function, wherein the value of the first loss function is used to characterize a degree of difference between the human eye direction information corresponding to the sample eye images input into the initial neural network and the corresponding labeled human eye direction information, and the value of the second loss function is used to characterize a degree of difference between the human eye discriminant information corresponding to the sample eye images input into the initial neural network and the labeled human eye discriminant information; and
in response to determining that the value of the first loss function is less than or equal to a preset first threshold and the value of the second loss function is less than or equal to a preset second threshold, determining that the initial neural network reaches the optimization target.
5. A method for recognizing a human eye, comprising:
obtaining an eye image to be recognized; and
inputting the eye image to be recognized into a pre-trained human eye recognition model to obtain human eye direction information and human eye discriminant information, wherein the human eye recognition model is generated according to the method of any one of claims 1-4, the human eye direction information is used to indicate a gaze direction of a human eye characterized by the eye image to be recognized, and the human eye discriminant information is used to indicate whether the human eye characterized by the eye image to be recognized is gazing at a target position.
6. An apparatus for generating a human eye recognition model, comprising:
an acquiring unit, configured to obtain a training sample set, wherein each training sample comprises a sample eye image and, corresponding to the sample eye image, labeled human eye direction information and labeled human eye discriminant information, the labeled human eye direction information being used to indicate a gaze direction of a human eye characterized by the sample eye image, and the labeled human eye discriminant information being used to indicate whether the human eye characterized by the sample eye image is gazing at a target position; and
a training unit, configured to use a machine learning method, taking the sample eye images of the training samples in the training sample set as input and the labeled human eye direction information and labeled human eye discriminant information corresponding to the input sample eye images as output, and training to obtain the human eye recognition model.
7. The apparatus according to claim 6, wherein the training unit comprises:
an extraction module, configured to extract an initial neural network; and
a training module, configured to perform the following training step: inputting at least one sample eye image in the training sample set into the initial neural network to obtain human eye direction information and human eye discriminant information corresponding to each sample eye image in the at least one sample eye image; comparing the human eye direction information and human eye discriminant information corresponding to each sample eye image in the at least one sample eye image with the corresponding labeled human eye direction information and labeled human eye discriminant information, respectively, and determining, according to the comparison result, whether the initial neural network reaches a preset optimization target; and, in response to determining that the initial neural network reaches the optimization target, determining the initial neural network to be the trained human eye recognition model.
8. The apparatus according to claim 7, wherein the training unit is further configured to:
in response to determining that the initial neural network does not reach the optimization target, adjust network parameters of the initial neural network, form a training sample set from unused training samples, and continue to perform the training step.
9. The apparatus according to claim 7 or 8, wherein the training module comprises:
a first determination sub-module, configured to determine, based on a preset first loss function and a preset second loss function, a value of the first loss function and a value of the second loss function, wherein the value of the first loss function is used to characterize a degree of difference between the human eye direction information corresponding to the sample eye images input into the initial neural network and the corresponding labeled human eye direction information, and the value of the second loss function is used to characterize a degree of difference between the human eye discriminant information corresponding to the sample eye images input into the initial neural network and the labeled human eye discriminant information; and
a second determination sub-module, configured to determine, in response to determining that the value of the first loss function is less than or equal to a preset first threshold and the value of the second loss function is less than or equal to a preset second threshold, that the initial neural network reaches the optimization target.
10. A device for recognizing human eyes, comprising:
an acquisition unit, configured to acquire a human eye image to be recognized; and
a recognition unit, configured to input the human eye image to be recognized into a pre-trained human eye recognition model to obtain human eye direction information and human eye discrimination information, wherein the human eye recognition model is generated according to the method of any one of claims 1-4, the human eye direction information indicates the gaze direction of the human eye characterized by the human eye image to be recognized, and the human eye discrimination information indicates whether the human eye characterized by the human eye image to be recognized fixates on a target position.
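At inference time, the device of claim 10 simply feeds an eye image to the trained model and reads off the two outputs. A minimal sketch, where the model is a stand-in callable and all names are hypothetical:

```python
# Hypothetical sketch of claim 10's recognition unit: one eye image in,
# gaze-direction information and a fixation decision out. The 0.5 cutoff
# for the discrimination score is an assumption, not part of the claim.

def recognize_eye(image, model):
    """Return gaze direction and whether the eye fixates the target."""
    direction, discrimination = model(image)
    return {"direction": direction,
            "fixates_target": discrimination >= 0.5}
```

In practice `model` would be the network trained per claims 1-4; here a fake callable suffices to show the data flow.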
11. An electronic device, comprising:
one or more processors; and
a storage device for storing one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the method according to any one of claims 1-5.
12. A computer-readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method according to any one of claims 1-5.
CN201810286481.5A 2018-03-30 2018-03-30 Method and device for generating human eye recognition model Active CN108491823B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810286481.5A CN108491823B (en) 2018-03-30 2018-03-30 Method and device for generating human eye recognition model


Publications (2)

Publication Number Publication Date
CN108491823A true CN108491823A (en) 2018-09-04
CN108491823B CN108491823B (en) 2021-12-24

Family

ID=63317658

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810286481.5A Active CN108491823B (en) 2018-03-30 2018-03-30 Method and device for generating human eye recognition model

Country Status (1)

Country Link
CN (1) CN108491823B (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101466305A (en) * 2006-06-11 2009-06-24 沃尔沃技术公司 Method and apparatus for determining and analyzing a location of visual interest
CN103067662A (en) * 2013-01-21 2013-04-24 天津师范大学 Self-adapting sightline tracking system
CN104951084A (en) * 2015-07-30 2015-09-30 京东方科技集团股份有限公司 Eye-tracking method and device
CN106371566A (en) * 2015-07-24 2017-02-01 由田新技股份有限公司 Correction module, method and computer readable recording medium for eye tracking
CN106909220A (en) * 2017-02-21 2017-06-30 山东师范大学 A kind of sight line exchange method suitable for touch-control
CN107563123A (en) * 2017-09-27 2018-01-09 百度在线网络技术(北京)有限公司 Method and apparatus for marking medical image
CN107679525A (en) * 2017-11-01 2018-02-09 腾讯科技(深圳)有限公司 Image classification method, device and computer-readable recording medium
CN107679451A (en) * 2017-08-25 2018-02-09 百度在线网络技术(北京)有限公司 Establish the method, apparatus, equipment and computer-readable storage medium of human face recognition model


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111105008A (en) * 2018-10-29 2020-05-05 富士通株式会社 Model training method, data recognition method and data recognition device
CN109803450A (en) * 2018-12-12 2019-05-24 平安科技(深圳)有限公司 Wireless device and computer connection method, electronic device and storage medium
CN109784304A (en) * 2019-01-29 2019-05-21 北京字节跳动网络技术有限公司 Method and apparatus for marking dental imaging
CN109816589A (en) * 2019-01-30 2019-05-28 北京字节跳动网络技术有限公司 Method and apparatus for generating cartoon style transformation model
CN110136714A (en) * 2019-05-14 2019-08-16 北京探境科技有限公司 Natural interaction sound control method and device
CN110188833A (en) * 2019-06-04 2019-08-30 北京字节跳动网络技术有限公司 Method and apparatus for training pattern
CN110188833B (en) * 2019-06-04 2021-06-18 北京字节跳动网络技术有限公司 Method and apparatus for training a model
CN111414851A (en) * 2020-03-19 2020-07-14 上海交通大学 Single-camera fixation detection method without light supplement and calibration based on iris shape

Also Published As

Publication number Publication date
CN108491823B (en) 2021-12-24

Similar Documents

Publication Publication Date Title
CN108491823A (en) Method and apparatus for generating eye recognition model
CN109902659B (en) Method and apparatus for processing human body image
CN108898185A (en) Method and apparatus for generating image recognition model
CN108509915A (en) The generation method and device of human face recognition model
CN108985257A (en) Method and apparatus for generating information
CN108491809A (en) The method and apparatus for generating model for generating near-infrared image
CN108363995A (en) Method and apparatus for generating data
CN108776786A (en) Method and apparatus for generating user's truth identification model
CN108470328A (en) Method and apparatus for handling image
CN110503703A (en) Method and apparatus for generating image
CN109086719A (en) Method and apparatus for output data
CN108133201A (en) Face character recognition methods and device
CN108509892A (en) Method and apparatus for generating near-infrared image
CN109344752A (en) Method and apparatus for handling mouth image
CN108280413A (en) Face identification method and device
CN109472264A (en) Method and apparatus for generating object detection model
CN108876858A (en) Method and apparatus for handling image
CN109858444A (en) The training method and device of human body critical point detection model
CN110009059A (en) Method and apparatus for generating model
CN108062544A (en) For the method and apparatus of face In vivo detection
CN109241934A (en) Method and apparatus for generating information
CN108509890A (en) Method and apparatus for extracting information
CN108509921A (en) Method and apparatus for generating information
CN108491812A (en) The generation method and device of human face recognition model
CN108133197A (en) For generating the method and apparatus of information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant