CN108416317A - Method and device for obtaining information - Google Patents
- Publication number
- CN108416317A (application CN201810226172.9A)
- Authority
- CN
- China
- Prior art keywords
- eye
- facial image
- sample
- angle
- infrared
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/193—Preprocessing; Feature extraction
Abstract
An embodiment of the present application discloses a method and device for obtaining information. One implementation of the method includes: obtaining a near-infrared facial image; and importing the near-infrared facial image into a pre-trained eye-angle recognition model to obtain the eye-angle information corresponding to the image. The eye-angle recognition model is used to identify the eye-angle information corresponding to a near-infrared facial image, and the eye-angle information characterizes the angle of the eye gaze direction. This embodiment improves the accuracy of obtaining eye-angle information.
Description
Technical field
The present application relates to the technical field of image processing, and in particular to a method and device for obtaining information.
Background
With the development of science and technology, electronic devices have become increasingly intelligent. For an electronic device operated with the eyes, the device typically identifies the gaze direction of the eyes under visible light and performs the corresponding operation. For example, the device may determine from the gaze direction whether the eyes are looking at its camera, in order to perform operations such as unlocking.
Summary of the invention
The purpose of the embodiments of the present application is to propose a method and device for obtaining information.
In a first aspect, an embodiment of the present application provides a method for obtaining information. The method includes: obtaining a near-infrared facial image; and importing the near-infrared facial image into a pre-trained eye-angle recognition model to obtain the eye-angle information corresponding to the image. The eye-angle recognition model is used to identify the eye-angle information corresponding to a near-infrared facial image, and the eye-angle information characterizes the angle of the eye gaze direction.
In some embodiments, the method further includes a step of building the eye-angle recognition model, which includes: obtaining multiple sample near-infrared facial images and, for each of them, the corresponding sample eye-angle information, where the sample eye-angle information characterizes the angle of the eye gaze direction corresponding to the sample image; and training the eye-angle recognition model with each sample near-infrared facial image as input and the corresponding sample eye-angle information as output.
In some embodiments, training the eye-angle recognition model with each sample near-infrared facial image as input and the corresponding sample eye-angle information as output includes the following training step: input each sample near-infrared facial image in turn into an initial eye-angle recognition model to obtain the predicted sample eye-angle information corresponding to that image; compare the predicted sample eye-angle information with the sample eye-angle information of the same image to obtain the recognition accuracy of the initial eye-angle recognition model; determine whether the recognition accuracy exceeds a preset accuracy threshold; and, if it does, take the initial eye-angle recognition model as the trained eye-angle recognition model.
In some embodiments, the training further includes: in response to the recognition accuracy not exceeding the preset accuracy threshold, adjusting the parameters of the initial eye-angle recognition model and repeating the training step.
In some embodiments, obtaining the multiple sample near-infrared facial images and their sample eye-angle information includes: obtaining multiple visible-light facial images and, for each, the corresponding visible-light sample eye-angle information, where the visible-light sample eye-angle information characterizes the angle of the eye gaze direction corresponding to the visible-light image; converting the visible-light facial images into corresponding sample near-infrared facial images; and using the visible-light sample eye-angle information of each visible-light image as the sample eye-angle information of the corresponding sample near-infrared facial image.
In some embodiments, the method further includes a step of building the initial eye-angle recognition model, which includes: obtaining multiple near-infrared facial images captured while human eyes gaze at multiple set angles; and, using a machine learning method, training the initial eye-angle recognition model with each of these near-infrared facial images as input and the corresponding set angle as output.
In a second aspect, an embodiment of the present application provides a device for obtaining information. The device includes: an image receiving unit for obtaining a near-infrared facial image; and an angle information acquiring unit for importing the near-infrared facial image into a pre-trained eye-angle recognition model to obtain the corresponding eye-angle information. The eye-angle recognition model is used to identify the eye-angle information corresponding to a near-infrared facial image, and the eye-angle information characterizes the angle of the eye gaze direction.
In some embodiments, the device further includes an eye-angle recognition model construction unit for building the eye-angle recognition model, which includes: a sample information obtaining subunit for obtaining multiple sample near-infrared facial images and the corresponding sample eye-angle information of each, where the sample eye-angle information characterizes the angle of the eye gaze direction corresponding to the sample image; and an eye-angle recognition model building subunit for training the eye-angle recognition model with each sample near-infrared facial image as input and the corresponding sample eye-angle information as output.
In some embodiments, the eye-angle recognition model building subunit includes an eye-angle recognition model building module, which inputs each sample near-infrared facial image in turn into an initial eye-angle recognition model to obtain the corresponding predicted sample eye-angle information, compares the predicted information with the sample eye-angle information of the same image to obtain the recognition accuracy of the initial model, determines whether the recognition accuracy exceeds a preset accuracy threshold, and, if it does, takes the initial eye-angle recognition model as the trained eye-angle recognition model.
In some embodiments, the eye-angle recognition model building subunit includes a parameter adjustment module, which, in response to the recognition accuracy not exceeding the preset accuracy threshold, adjusts the parameters of the initial eye-angle recognition model and repeats the training step.
In some embodiments, the sample information obtaining subunit includes: a visible-light sample information obtaining module for obtaining multiple visible-light facial images and the corresponding visible-light sample eye-angle information of each, where the visible-light sample eye-angle information characterizes the angle of the eye gaze direction corresponding to the visible-light image; and a sample information conversion module for converting the visible-light facial images into corresponding sample near-infrared facial images and using the visible-light sample eye-angle information of each visible-light image as the sample eye-angle information of the corresponding sample near-infrared facial image.
In some embodiments, the device further includes an initial eye-angle recognition model construction unit for building the initial eye-angle recognition model, which includes: a near-infrared facial image obtaining subunit for obtaining multiple near-infrared facial images captured while human eyes gaze at multiple set angles; and an initial eye-angle recognition model building subunit for training, using a machine learning method, the initial eye-angle recognition model with each of these near-infrared facial images as input and the corresponding set angle as output.
In a third aspect, an embodiment of the present application provides an electronic device, including: one or more processors; a memory for storing one or more programs; and a near-infrared camera for obtaining near-infrared images. When the one or more programs are executed by the one or more processors, the one or more processors carry out the method for obtaining information of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method for obtaining information of the first aspect.
The method and device for obtaining information provided by the embodiments of the present application import an acquired near-infrared facial image into an eye-angle recognition model to obtain eye-angle information. This avoids interference from ambient brightness and improves the accuracy of obtaining eye-angle information.
Description of the drawings
Other features, objects and advantages of the present application will become more apparent upon reading the following detailed description of non-restrictive embodiments with reference to the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram to which the present application may be applied;
Fig. 2 is a flowchart of one embodiment of the method for obtaining information according to the present application;
Fig. 3 is a flowchart of one embodiment of the eye-angle recognition model training method according to the present application;
Fig. 4 is a schematic diagram of an application scenario of the method for obtaining information according to the present application;
Fig. 5 is a structural schematic diagram of one embodiment of the device for obtaining information according to the present application;
Fig. 6 is a system structure diagram of an electronic device suitable for implementing the embodiments of the present application.
Detailed description
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are used only to explain the related invention, not to limit it. It should also be noted that, for convenience of description, only the parts related to the invention are shown in the drawings.
It should be noted that, in the absence of conflict, the embodiments of the present application and the features in the embodiments may be combined with each other. The present application is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which the method for obtaining information or the device for obtaining information of the embodiments of the present application may be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102 and 103, a network 104 and a server 105. The network 104 provides the medium for communication links between the terminal devices 101, 102, 103 and the server 105, and may include various connection types, such as wired or wireless communication links, or fiber-optic cables.
A user may use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104, to receive or send messages and so on. The terminal devices 101, 102, 103 may be equipped with a near-infrared camera for obtaining near-infrared images, a visible-light camera for obtaining visible-light images, and various image processing applications, such as visible-light image processing applications, near-infrared image processing applications, image selection applications and image editing applications.
The terminal devices 101, 102, 103 may be hardware or software. When they are hardware, they may be various electronic devices that have a display screen and support the acquisition of visible-light and near-infrared images, including but not limited to smartphones, tablet computers, laptop computers and desktop computers. When they are software, they may be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module. No specific limitation is made here.
The server 105 may be a server that provides various services, for example a server that performs image processing on the near-infrared facial images acquired by the terminal devices 101, 102, 103. The server may analyze the received data, such as near-infrared facial images, to obtain the eye-angle information of the corresponding near-infrared facial image.
It should be noted that the method for obtaining information provided by the embodiments of the present application may be performed by the terminal devices 101, 102, 103 alone, or jointly by the terminal devices 101, 102, 103 and the server 105. Correspondingly, the device for obtaining information may be arranged in the terminal devices 101, 102, 103, or in the server 105.
It should be noted that the server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (for example, to provide distributed services), or as a single piece of software or software module. No specific limitation is made here.
It should be understood that the numbers of terminal devices, networks and servers in Fig. 1 are merely illustrative. There may be any number of terminal devices, networks and servers according to implementation needs.
With continued reference to Fig. 2, a flow 200 of one embodiment of the method for obtaining information according to the present application is shown. The method for obtaining information includes the following steps:
Step 201: obtain a near-infrared facial image.
In the present embodiment, the executing body of the method for obtaining information (for example, a terminal device shown in Fig. 1) may obtain a near-infrared facial image through a wired or wireless connection. The near-infrared facial image may be an image acquired by the near-infrared camera on the terminal device 101, 102, 103, or an image obtained by converting a visible-light facial image to near infrared. It should be pointed out that the wireless connection may include, but is not limited to, 3G/4G connections, WiFi connections, Bluetooth connections, WiMAX connections, ZigBee connections, UWB (ultra-wideband) connections and other wireless connections now known or developed in the future.
In practice, when a user operates the terminal device 101, 102, 103 with the eyes, reasons such as the shooting angle or low ambient brightness may make it difficult for the terminal device to accurately identify eye-angle information from an acquired visible-light facial image, so that operations cannot be performed accurately through the eye-angle information. For example, under low-light conditions, when the terminal device 101, 102, 103 is to be unlocked (or a payment operation is to be performed, etc.) through the user's eye-angle information, the visible-light facial image acquired by the terminal device does not easily yield the correct eye-angle information, and the unlock operation cannot be executed.
For this purpose, the present application may first obtain a near-infrared facial image through the near-infrared camera on the terminal device 101, 102, 103.
Step 202: import the near-infrared facial image into a pre-trained eye-angle recognition model to obtain the eye-angle information corresponding to the near-infrared facial image.
After the near-infrared facial image is obtained, it may be imported into the eye-angle recognition model. The eye-angle recognition model may perform image processing and the like on the near-infrared facial image to obtain the eye-angle information of the eye region in the image. The eye-angle recognition model may be used to identify the eye-angle information corresponding to a near-infrared facial image, and the eye-angle information may be used to characterize the angle of the eye gaze direction.
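The two steps above can be sketched as a minimal pipeline. This is an illustrative sketch only: the names `get_eye_angle` and `toy_model` are not from the patent, and the stand-in model (which maps mean pixel intensity to an angle) merely makes the pipeline runnable in place of a real pre-trained eye-angle recognition model.

```python
def get_eye_angle(nir_image, model):
    """Step 202: import a near-infrared facial image into the model and
    return the eye-angle information (here, one angle in degrees)."""
    return model(nir_image)

def toy_model(image):
    # Stand-in for a pre-trained model: map mean intensity to [0, 90] degrees.
    mean = sum(sum(row) for row in image) / (len(image) * len(image[0]))
    return mean * 90.0 / 255.0

nir_image = [[128, 128], [128, 128]]  # tiny fake near-infrared facial image
angle = get_eye_angle(nir_image, toy_model)
```

A real implementation would replace `toy_model` with the trained network described in the embodiments below.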
In some optional implementations of the present embodiment, the method may further include a step of building the eye-angle recognition model, which may include the following steps:
First, obtain multiple sample near-infrared facial images and the sample eye-angle information corresponding to each sample near-infrared facial image.
To obtain the eye-angle recognition model, multiple sample near-infrared facial images and the sample eye-angle information of each may first be obtained. The sample eye-angle information characterizes the angle of the eye gaze direction corresponding to the sample image. The sample images cover cases of the eyes facing various directions, and correspondingly the sample eye-angle information covers the angle information of the eyes when facing those directions. In general, the angle value corresponding to the angle information may be the angle, on a coordinate plane constructed with the eyes as the origin, between a coordinate axis and the straight line through the origin along the gaze direction. The angle information may also take other forms, which are not enumerated here.
Second, train the eye-angle recognition model with each sample near-infrared facial image as input and the corresponding sample eye-angle information as output.
After the multiple sample near-infrared facial images and the sample eye-angle information corresponding to each are obtained, the model may be trained with each sample near-infrared facial image as the input of the model and the corresponding sample eye-angle information as the output of the model, yielding the eye-angle recognition model. The model may be a deep learning network or another type of learning network, which is not enumerated here.
After the eye-angle information is obtained, the executing body may perform the corresponding operation according to the eye-angle information, which is not enumerated here.
With further reference to Fig. 3, a flow 300 of one embodiment of the eye-angle recognition model training method according to the present application is shown. The flow 300 of the eye-angle recognition model training method includes the following steps:
Step 301: obtain multiple sample near-infrared facial images and the sample eye-angle information corresponding to each sample near-infrared facial image.
In the present embodiment, the executing body of the eye-angle recognition model training method (for example, a terminal device shown in Fig. 1) may obtain, through a wired or wireless connection, multiple sample near-infrared facial images and the sample eye-angle information corresponding to each of them. The sample eye-angle information characterizes the angle of the eye gaze direction corresponding to the sample image.
To obtain the eye-angle recognition model, multiple sample near-infrared facial images and the sample eye-angle information of each may first be obtained. The sample eye-angle information characterizes the angle of the eye gaze direction corresponding to the sample image. The sample images cover cases of the eyes facing various directions, and correspondingly the sample eye-angle information covers the angle information of the eyes when facing those directions. In general, the angle value corresponding to the angle information may be the angle, on a coordinate plane constructed with the eyes as the origin, between a coordinate axis and the straight line through the origin along the gaze direction. The angle information may also take other forms, which are not enumerated here.
In some optional implementations of the present embodiment, obtaining the multiple sample near-infrared facial images and the sample eye-angle information corresponding to each may include the following steps:
First, obtain multiple visible-light facial images and the visible-light sample eye-angle information corresponding to each visible-light facial image.
In practice, what a user usually obtains through a terminal device are visible-light images, so the number of near-infrared images is typically smaller than the number of visible-light images, and existing methods can obtain eye-angle information from visible-light facial images. In order to train the eye-angle recognition model, the present embodiment may obtain multiple visible-light facial images and the visible-light sample eye-angle information corresponding to each of them. The visible-light sample eye-angle information characterizes the angle of the eye gaze direction corresponding to the visible-light facial image.
Second, convert the visible-light facial images into corresponding sample near-infrared facial images, and use the visible-light sample eye-angle information of each visible-light facial image as the sample eye-angle information of the corresponding sample near-infrared facial image.
The present embodiment may perform image processing on the visible-light facial images with existing image conversion methods to convert them into near-infrared facial images; the conversion methods are not described in detail here. Since the gaze direction does not change when a visible-light facial image is converted into a near-infrared facial image, the sample eye-angle information of the visible-light facial image may be used as the sample eye-angle information of the corresponding sample near-infrared facial image.
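The label-reuse step can be sketched as follows. The patent leaves the visible-light-to-near-infrared conversion to "existing image conversion methods", so the conversion below (a red-heavy weighted channel sum as a crude single-channel proxy, with assumed weights) is a placeholder; the point of the sketch is that the visible-light angle label is carried over unchanged.

```python
def to_pseudo_nir(rgb_pixel):
    # Placeholder conversion with assumed weights, not a real NIR model.
    r, g, b = rgb_pixel
    return round(0.7 * r + 0.2 * g + 0.1 * b)

def convert_sample(visible_image, visible_angle):
    """Convert a visible-light facial image and reuse its angle label,
    since the gaze direction is unchanged by the conversion."""
    nir_image = [[to_pseudo_nir(px) for px in row] for row in visible_image]
    return nir_image, visible_angle

visible = [[(200, 100, 50)]]          # one-pixel toy visible-light image
nir, angle = convert_sample(visible, 30.0)
```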
Step 302: input each sample near-infrared facial image in turn into the initial eye-angle recognition model to obtain the predicted sample eye-angle information corresponding to each sample near-infrared facial image.
After the multiple sample near-infrared facial images are obtained, each may be input into the initial eye-angle recognition model to obtain the predicted sample eye-angle information corresponding to that image. The initial eye-angle recognition model may be an untrained deep learning model or a deep learning model whose training is not complete. In the network corresponding to the initial eye-angle recognition model, each layer may be provided with initial parameters, which are continually adjusted during training. The initial eye-angle recognition model may be any of various types of untrained or incompletely trained artificial neural networks, or a model obtained by combining several of them. For example, the initial eye-angle recognition model may be an untrained convolutional neural network, an untrained recurrent neural network, or a model obtained by combining an untrained convolutional neural network, an untrained recurrent neural network and an untrained fully connected layer.
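One forward pass of such a composed, still-untrained model can be sketched at toy scale: one convolution, a ReLU, and a fully connected layer producing a single angle value. All parameter values and sizes here are illustrative initial parameters that training would adjust, not those of a real facial-image network.

```python
def conv2d_valid(image, kernel):
    """Valid (no-padding) 2-D convolution of a nested-list image."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

def relu(fmap):
    return [[max(0.0, v) for v in row] for row in fmap]

def fully_connected(fmap, weights, bias):
    flat = [v for row in fmap for v in row]
    return sum(w * v for w, v in zip(weights, flat)) + bias

image = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]   # 3x3 toy NIR image
kernel = [[0.0, 0.1], [0.1, 0.0]]           # initial conv parameters
fmap = relu(conv2d_valid(image, kernel))    # 2x2 feature map
angle = fully_connected(fmap, [0.5] * 4, 0.0)  # predicted angle value
```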
In some optional implementations of the present embodiment, the method further includes a step of building the initial eye-angle recognition model, which may include the following steps:
First, obtain multiple near-infrared facial images captured while human eyes gaze at multiple set angles.
To obtain the eye-angle recognition model, the initial eye-angle recognition model must first be obtained; the initial eye-angle recognition model is used to identify the eye-angle information corresponding to a near-infrared facial image. For this purpose, the embodiment of the present application may first obtain multiple near-infrared facial images captured while human eyes gaze at multiple set angles. For example, multiple mark points may be drawn on a blackboard and the photographed person asked to gaze at each mark point in turn; near-infrared facial images are then captured as the person gazes at each mark point. It should be noted that during this process the positions of the photographed person and the blackboard are fixed relative to each other, so the eye-angle information between the person and each mark point is known.
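The data-collection geometry can be made concrete. A sketch under the assumption that the set angle is measured from the line between the eyes and the mark point, with positions given in a simple (x, y, z) frame where z is the distance to the board; the patent does not fix this parameterization.

```python
import math

def mark_point_angle(eye_pos, mark_pos):
    """Horizontal (yaw) and vertical (pitch) gaze angles in degrees from
    the eyes to a mark point, with fixed relative positions known in
    advance, so the angles can serve directly as training labels."""
    dx = mark_pos[0] - eye_pos[0]
    dy = mark_pos[1] - eye_pos[1]
    dz = mark_pos[2] - eye_pos[2]
    yaw = math.degrees(math.atan2(dx, dz))
    pitch = math.degrees(math.atan2(dy, dz))
    return yaw, pitch

# Person 1 m from the board, mark point 1 m to the right at eye height:
yaw, pitch = mark_point_angle((0, 0, 0), (1.0, 0.0, 1.0))  # yaw about 45
```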
Second, using a machine learning method, train the initial eye-angle recognition model with each of these near-infrared facial images as input and the corresponding set angle as output.
Initial eye-angle identification model is used to characterize the correspondence between near-infrared facial image and set angle.Make
For example, initial eye-angle identification model can be technical staff based on a large amount of near-infrared facial image and set angle
The correspondence for the correspondence for counting and pre-establishing, be stored between multiple near-infrared facial images and set angle
Table;Can also be technical staff be set in advance in based on the statistics to mass data it is in above-mentioned executive agent, to near-infrared
Facial image carries out numerical computations to obtain the calculation formula of the result of calculation for characterizing set angle.Above-mentioned executive agent
Machine learning method can be utilized, nearly infrared face image is as input, by the set angle of corresponding near-infrared facial image
As output, training obtains initial eye-angle identification model.Specifically, above-mentioned executive agent can use deep learning model
Or the models such as support vector machines (Support Vector Machine, SVM), nearly infrared face image is as the defeated of model
Enter, the model is carried out using machine learning method using the set angle of corresponding near-infrared facial image as the output of model
Training, obtains initial eye-angle identification model.Initial eye-angle identification model, which can be built, obtains near-infrared facial image
Accurate eye-angle information basis, for subsequently obtain training completion eye-angle identification model be ready.
Step 303: compare the predicted sample eye-angle information corresponding to each sample near-infrared facial image in the above-described multiple sample near-infrared facial images with the sample eye-angle information of that sample near-infrared facial image, to obtain the recognition accuracy of the above-described initial eye-angle recognition model.
As described above, each sample near-infrared facial image has corresponding sample eye-angle information. Here, the sample eye-angle information is the accurate angle information of the corresponding sample near-infrared facial image, while the predicted sample eye-angle information is obtained from the initial eye-angle recognition model. The execution body may compare the predicted sample eye-angle information with the sample eye-angle information, for example as a ratio, to obtain the recognition accuracy of the current initial eye-angle recognition model. For example, if the angle corresponding to the predicted sample eye-angle information is 89 degrees and the angle corresponding to the sample eye-angle information is 90 degrees, the recognition accuracy may be 89/90 ≈ 99%. It should be noted that the recognition accuracy may also be the average accuracy obtained by aggregating the recognition accuracies of multiple sample near-infrared facial images, or an accuracy measure of another form; these are not enumerated one by one here.
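The ratio-style accuracy from the 89°/90° example, and the averaged variant the text mentions, can be sketched as follows. The helper names are ours; sorting the pair keeps the ratio at most 1, and this particular measure assumes nonzero angles.

```python
def recognition_accuracy(pred_deg: float, true_deg: float) -> float:
    """Per-sample accuracy as the ratio used in the text's example
    (89 deg / 90 deg ~ 99%). Assumes both angles are nonzero."""
    lo, hi = sorted([pred_deg, true_deg])
    return lo / hi

def average_accuracy(preds, trues) -> float:
    """Average accuracy over multiple sample images, the aggregated
    variant the text also allows."""
    pairs = list(zip(preds, trues))
    return sum(recognition_accuracy(p, t) for p, t in pairs) / len(pairs)

acc = recognition_accuracy(89.0, 90.0)  # about 0.989, i.e. ~99%
```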
Step 304: determine whether the above-described recognition accuracy exceeds a preset accuracy threshold.
In the present embodiment, the execution body may compare the recognition accuracy obtained for the initial eye-angle recognition model with a preset accuracy threshold. If the recognition accuracy exceeds the preset accuracy threshold, step 305 is executed; otherwise, step 306 is executed. For example, the preset accuracy threshold may be 95%.
Step 305: use the above-described initial eye-angle recognition model as the trained eye-angle recognition model.
In the present embodiment, when the recognition accuracy of the initial eye-angle recognition model exceeds the preset accuracy threshold, the model has reached the required accuracy. The execution body may then use the initial eye-angle recognition model as the trained eye-angle recognition model.
Step 306: adjust the parameters of the above-described initial eye-angle recognition model.
In the present embodiment, when the recognition accuracy of the initial eye-angle recognition model does not exceed the preset accuracy threshold, the execution body may adjust the parameters of the initial eye-angle recognition model and return to step 302, repeating until an eye-angle recognition model is trained whose recognition accuracy on the sample near-infrared facial images exceeds the preset accuracy threshold.
With continued reference to Fig. 4, Fig. 4 is a schematic diagram of an application scenario of the method for obtaining information according to the present embodiment. In the application scenario of Fig. 4, the unlock operation of terminal device 102 requires the user to gaze at a near-infrared camera. Under low-light conditions, terminal device 102 captures a near-infrared facial image of the user. Terminal device 102 imports the captured near-infrared facial image into the eye-angle recognition model to obtain the angle of the user's gaze direction corresponding to the image. When this angle satisfies the unlock condition (which may also be a payment condition or the like), a screen-unlock operation is performed on terminal device 102.
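The unlock decision in this scenario can be sketched as a simple tolerance check on the recognized gaze angle. The target angle and tolerance below are illustrative assumptions, not values from the text.

```python
def should_unlock(gaze_angle_deg: float,
                  target_deg: float = 0.0,
                  tolerance_deg: float = 5.0) -> bool:
    """Unlock (or authorize a payment) only when the recognized gaze angle
    is close enough to the expected direction, e.g. looking straight at
    the near-infrared camera. Target and tolerance are assumptions."""
    return abs(gaze_angle_deg - target_deg) <= tolerance_deg

should_unlock(2.5)   # user is looking at the camera: unlock
should_unlock(30.0)  # gaze is elsewhere: keep the screen locked
```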
In the method provided by the above embodiment of the present application, the captured near-infrared facial image is imported into the eye-angle recognition model to obtain eye-angle information. This avoids interference from ambient brightness when obtaining eye-angle information and improves the accuracy of the obtained eye-angle information.
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, the present application provides an embodiment of a device for obtaining information. This device embodiment corresponds to the method embodiment shown in Fig. 2, and the device may be applied in various electronic apparatuses.
As shown in Fig. 5, the device 500 for obtaining information of the present embodiment may include an image receiving unit 501 and an angle information acquiring unit 502. The image receiving unit 501 is configured to obtain a near-infrared facial image. The angle information acquiring unit 502 is configured to import the above-described near-infrared facial image into a pre-trained eye-angle recognition model to obtain the eye-angle information corresponding to the image, where the eye-angle recognition model is used to recognize the eye-angle information corresponding to a near-infrared facial image, and the eye-angle information is used to characterize the angle of an eye gaze direction.
In some optional implementations of the present embodiment, the device 500 for obtaining information may further include an eye-angle recognition model construction unit (not shown) for constructing the eye-angle recognition model. The eye-angle recognition model construction unit may include a sample information obtaining subunit (not shown) and an eye-angle recognition model construction subunit (not shown). The sample information obtaining subunit is configured to obtain multiple sample near-infrared facial images and the sample eye-angle information corresponding to each sample near-infrared facial image in the multiple sample near-infrared facial images, where the sample eye-angle information is used to characterize the angle of the eye gaze direction corresponding to a sample near-infrared facial image. The eye-angle recognition model construction subunit is configured to train the above-described eye-angle recognition model, taking each sample near-infrared facial image in the multiple sample near-infrared facial images as input and the sample eye-angle information corresponding to each sample near-infrared facial image as output.
In some optional implementations of the present embodiment, the above-described eye-angle recognition model construction subunit may include an eye-angle recognition model construction module (not shown), configured to: input each sample near-infrared facial image in the above-described multiple sample near-infrared facial images in turn into the initial eye-angle recognition model, to obtain the predicted sample eye-angle information corresponding to each sample near-infrared facial image; compare the predicted sample eye-angle information corresponding to each sample near-infrared facial image with the sample eye-angle information of that image, to obtain the recognition accuracy of the initial eye-angle recognition model; determine whether the recognition accuracy exceeds a preset accuracy threshold; and, if it exceeds the preset accuracy threshold, use the initial eye-angle recognition model as the trained eye-angle recognition model.
In some optional implementations of the present embodiment, the above-described eye-angle recognition model construction subunit may include a parameter adjustment module (not shown), configured to, in response to the recognition accuracy not exceeding the preset accuracy threshold, adjust the parameters of the initial eye-angle recognition model and continue executing the above-described training step.
In some optional implementations of the present embodiment, the above-described sample information obtaining subunit may include a visible-light sample information obtaining module (not shown) and a sample information conversion module (not shown). The visible-light sample information obtaining module is configured to obtain multiple visible-light facial images and the visible-light sample eye-angle information corresponding to each visible-light facial image in the multiple visible-light facial images, where the visible-light sample eye-angle information is used to characterize the angle of the eye gaze direction corresponding to a visible-light facial image. The sample information conversion module is configured to convert the multiple visible-light facial images into corresponding multiple sample near-infrared facial images, and to use the visible-light sample eye-angle information corresponding to each visible-light facial image as the sample eye-angle information of the sample near-infrared facial image corresponding to that visible-light facial image.
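The conversion from visible-light samples to near-infrared samples, with the eye-angle labels carried over, can be sketched as follows. The text does not specify the conversion method, so the channel weighting below is a crude illustrative stand-in (skin reflects strongly toward the red end of the spectrum); in practice a learned image-translation model could be substituted.

```python
import numpy as np

def to_pseudo_nir(rgb: np.ndarray) -> np.ndarray:
    """Stand-in conversion from a visible-light RGB image (H x W x 3) to a
    near-infrared-like single-channel image. The weights are an
    illustrative assumption, not the patent's conversion."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.7 * r + 0.2 * g + 0.1 * b

def convert_samples(visible_images, visible_angles):
    """Each converted image keeps the eye-angle label of its source
    visible-light image, as the sample information conversion module does."""
    return [(to_pseudo_nir(img), ang)
            for img, ang in zip(visible_images, visible_angles)]

rgb = np.ones((4, 4, 3))
samples = convert_samples([rgb], [12.0])  # one NIR-like image labeled 12.0 deg
```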
In some optional implementations of the present embodiment, the device 500 for obtaining information may further include an initial eye-angle recognition model construction unit (not shown) for constructing the above-described initial eye-angle recognition model. The initial eye-angle recognition model construction unit includes a near-infrared facial image obtaining subunit (not shown) and an initial eye-angle recognition model construction subunit (not shown). The near-infrared facial image obtaining subunit is configured to obtain multiple near-infrared facial images captured while a human eye gazes at multiple set angles. The initial eye-angle recognition model construction subunit is configured to use a machine learning method to train the initial eye-angle recognition model, taking each near-infrared facial image in the multiple near-infrared facial images as input and the set angle corresponding to each near-infrared facial image as output.
The present embodiment further provides an electronic apparatus, including: one or more processors; a memory for storing one or more programs; and a near-infrared camera for capturing near-infrared images. When the one or more programs are executed by the one or more processors, the one or more processors are caused to execute the above-described method for obtaining information.
The present embodiment further provides a computer-readable medium on which a computer program is stored; when executed by a processor, the program implements the above-described method for obtaining information.
Referring now to Fig. 6, it shows a structural schematic diagram of a computer system 600 suitable for implementing the electronic apparatus of an embodiment of the present application. The electronic apparatus shown in Fig. 6 is only an example and should not impose any limitation on the functions or scope of use of embodiments of the present application.
As shown in Fig. 6, the computer system 600 includes a central processing unit (CPU) 601, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage section 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the system 600. The CPU 601, the ROM 602, and the RAM 603 are connected to one another via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input section 606 including a keyboard, a mouse, and the like; an output section 607 including a liquid crystal display (LCD), a speaker, and the like; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disc, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as needed so that a computer program read from it can be installed into the storage section 608 as needed. A near-infrared camera 612 is connected to the I/O interface 605 through a data interface.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. When the computer program is executed by the central processing unit (CPU) 601, the above-described functions defined in the method of the present application are performed.
It should be noted that the computer-readable medium of the present application may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present application, a computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in connection with an instruction execution system, apparatus, or device. In the present application, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and it can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. The program code contained on a computer-readable medium may be transmitted by any suitable medium, including but not limited to wireless, wire, optical cable, RF, and the like, or any suitable combination of the above.
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present application. In this regard, each box in a flowchart or block diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that in some alternative implementations, the functions marked in the boxes may occur in an order different from that shown in the drawings. For example, two boxes shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each box in a block diagram and/or flowchart, and combinations of boxes in a block diagram and/or flowchart, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present application may be implemented in software or in hardware. The described units may also be arranged in a processor; for example, this may be described as: a processor comprising an image receiving unit and an angle information acquiring unit. The names of these units do not in some cases constitute a limitation on the units themselves; for example, the angle information acquiring unit may also be described as "a unit for recognizing eye-angle information from a near-infrared facial image".
As another aspect, the present application further provides a computer-readable medium, which may be included in the device described in the above embodiments, or may exist separately without being assembled into the device. The computer-readable medium carries one or more programs; when the one or more programs are executed by the device, the device is caused to: obtain a near-infrared facial image; and import the near-infrared facial image into a pre-trained eye-angle recognition model to obtain the eye-angle information corresponding to the image, where the eye-angle recognition model is used to recognize the eye-angle information corresponding to a near-infrared facial image, and the eye-angle information is used to characterize the angle of an eye gaze direction.
The above description is only a preferred embodiment of the present application and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combinations of the above technical features; it should also cover, without departing from the above inventive concept, other technical solutions formed by any combination of the above technical features or their equivalents, for example, technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the present application.
Claims (14)
1. A method for obtaining information, characterized in that the method comprises:
obtaining a near-infrared facial image;
importing the near-infrared facial image into a pre-trained eye-angle recognition model to obtain eye-angle information corresponding to the near-infrared facial image, wherein the eye-angle recognition model is used to recognize the eye-angle information corresponding to a near-infrared facial image, and the eye-angle information is used to characterize an angle of an eye gaze direction.
2. The method according to claim 1, characterized in that the method further comprises a step of constructing the eye-angle recognition model, the step comprising:
obtaining multiple sample near-infrared facial images and sample eye-angle information corresponding to each sample near-infrared facial image in the multiple sample near-infrared facial images, wherein the sample eye-angle information is used to characterize an angle of an eye gaze direction corresponding to a sample near-infrared facial image;
training the eye-angle recognition model by taking each sample near-infrared facial image in the multiple sample near-infrared facial images as input and the sample eye-angle information corresponding to each sample near-infrared facial image as output.
3. The method according to claim 2, characterized in that the training of the eye-angle recognition model with each sample near-infrared facial image in the multiple sample near-infrared facial images as input and the sample eye-angle information corresponding to each sample near-infrared facial image as output comprises:
executing the following training step: inputting each sample near-infrared facial image in the multiple sample near-infrared facial images in turn into an initial eye-angle recognition model to obtain predicted sample eye-angle information corresponding to each sample near-infrared facial image; comparing the predicted sample eye-angle information corresponding to each sample near-infrared facial image with the sample eye-angle information of that sample near-infrared facial image to obtain a recognition accuracy of the initial eye-angle recognition model; determining whether the recognition accuracy exceeds a preset accuracy threshold; and, if the recognition accuracy exceeds the preset accuracy threshold, using the initial eye-angle recognition model as the trained eye-angle recognition model.
4. The method according to claim 3, characterized in that the training of the eye-angle recognition model further comprises:
in response to the recognition accuracy not exceeding the preset accuracy threshold, adjusting parameters of the initial eye-angle recognition model and continuing to execute the training step.
5. The method according to claim 2, characterized in that obtaining the multiple sample near-infrared facial images and the sample eye-angle information corresponding to each sample near-infrared facial image in the multiple sample near-infrared facial images comprises:
obtaining multiple visible-light facial images and visible-light sample eye-angle information corresponding to each visible-light facial image in the multiple visible-light facial images, wherein the visible-light sample eye-angle information is used to characterize an angle of an eye gaze direction corresponding to a visible-light facial image;
converting the multiple visible-light facial images into corresponding multiple sample near-infrared facial images, and using the visible-light sample eye-angle information corresponding to each visible-light facial image as the sample eye-angle information of the sample near-infrared facial image corresponding to that visible-light facial image.
6. The method according to claim 3, characterized in that the method further comprises a step of constructing the initial eye-angle recognition model, the step comprising:
obtaining multiple near-infrared facial images captured while a human eye gazes at multiple set angles;
using a machine learning method, training the initial eye-angle recognition model by taking each near-infrared facial image in the multiple near-infrared facial images as input and the set angle corresponding to each near-infrared facial image as output.
7. A device for obtaining information, characterized in that the device comprises:
an image receiving unit, configured to obtain a near-infrared facial image;
an angle information acquiring unit, configured to import the near-infrared facial image into a pre-trained eye-angle recognition model to obtain eye-angle information corresponding to the near-infrared facial image, wherein the eye-angle recognition model is used to recognize the eye-angle information corresponding to a near-infrared facial image, and the eye-angle information is used to characterize an angle of an eye gaze direction.
8. The device according to claim 7, characterized in that the device further comprises an eye-angle recognition model construction unit for constructing the eye-angle recognition model, the eye-angle recognition model construction unit comprising:
a sample information obtaining subunit, configured to obtain multiple sample near-infrared facial images and sample eye-angle information corresponding to each sample near-infrared facial image in the multiple sample near-infrared facial images, wherein the sample eye-angle information is used to characterize an angle of an eye gaze direction corresponding to a sample near-infrared facial image;
an eye-angle recognition model construction subunit, configured to train the eye-angle recognition model by taking each sample near-infrared facial image in the multiple sample near-infrared facial images as input and the sample eye-angle information corresponding to each sample near-infrared facial image as output.
9. The device according to claim 8, characterized in that the eye-angle recognition model construction subunit comprises:
an eye-angle recognition model construction module, configured to input each sample near-infrared facial image in the multiple sample near-infrared facial images in turn into an initial eye-angle recognition model to obtain predicted sample eye-angle information corresponding to each sample near-infrared facial image, compare the predicted sample eye-angle information corresponding to each sample near-infrared facial image with the sample eye-angle information of that sample near-infrared facial image to obtain a recognition accuracy of the initial eye-angle recognition model, determine whether the recognition accuracy exceeds a preset accuracy threshold, and, if the recognition accuracy exceeds the preset accuracy threshold, use the initial eye-angle recognition model as the trained eye-angle recognition model.
10. The device according to claim 9, characterized in that the eye-angle recognition model construction subunit comprises:
a parameter adjustment module, configured to, in response to the recognition accuracy not exceeding the preset accuracy threshold, adjust parameters of the initial eye-angle recognition model and continue to execute the training step.
11. The device according to claim 8, characterized in that the sample information obtaining subunit comprises:
a visible-light sample information obtaining module, configured to obtain multiple visible-light facial images and visible-light sample eye-angle information corresponding to each visible-light facial image in the multiple visible-light facial images, wherein the visible-light sample eye-angle information is used to characterize an angle of an eye gaze direction corresponding to a visible-light facial image;
a sample information conversion module, configured to convert the multiple visible-light facial images into corresponding multiple sample near-infrared facial images, and to use the visible-light sample eye-angle information corresponding to each visible-light facial image as the sample eye-angle information of the sample near-infrared facial image corresponding to that visible-light facial image.
12. The device according to claim 9, characterized in that the device further comprises an initial eye-angle recognition model construction unit for constructing the initial eye-angle recognition model, the initial eye-angle recognition model construction unit comprising:
a near-infrared facial image obtaining subunit, configured to obtain multiple near-infrared facial images captured while a human eye gazes at multiple set angles;
an initial eye-angle recognition model construction subunit, configured to use a machine learning method to train the initial eye-angle recognition model by taking each near-infrared facial image in the multiple near-infrared facial images as input and the set angle corresponding to each near-infrared facial image as output.
13. An electronic apparatus, comprising:
one or more processors;
a memory, for storing one or more programs;
a near-infrared camera, for capturing near-infrared images;
wherein, when the one or more programs are executed by the one or more processors, the one or more processors are caused to perform the method of any one of claims 1 to 6.
14. A computer-readable medium on which a computer program is stored, characterized in that, when executed by a processor, the program implements the method of any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810226172.9A CN108416317B (en) | 2018-03-19 | 2018-03-19 | Method and device for acquiring information |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810226172.9A CN108416317B (en) | 2018-03-19 | 2018-03-19 | Method and device for acquiring information |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108416317A true CN108416317A (en) | 2018-08-17 |
CN108416317B CN108416317B (en) | 2021-09-07 |
Family
ID=63132268
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810226172.9A Active CN108416317B (en) | 2018-03-19 | 2018-03-19 | Method and device for acquiring information |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108416317B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---|
CN111259698A (en) * | 2018-11-30 | 2020-06-09 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and device for acquiring image |
CN111259698B (en) * | 2018-11-30 | 2023-10-13 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and device for acquiring image |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---|
CN102309366A (en) * | 2011-07-21 | 2012-01-11 | Shandong University of Science and Technology | Control system and control method for controlling an upper prosthesis to move by using eye movement signals |
JP2013123180A (en) * | 2011-12-12 | 2013-06-20 | Denso Corp | Monitoring device |
JP2016035654A (en) * | 2014-08-01 | 2016-03-17 | Hiroshima Prefecture | Sight line detecting device and sight line input system |
CN107505707A (en) * | 2016-06-14 | 2017-12-22 | Fove Inc. | Head-mounted display and line-of-sight detection system |
Also Published As
Publication number | Publication date |
---|---|
CN108416317B (en) | 2021-09-07 |
Similar Documents
Publication | Title |
---|---|
CN108446387A (en) | Method and apparatus for updating face registration library |
CN108898185A (en) | Method and apparatus for generating image recognition model |
CN109858445A (en) | Method and apparatus for generating model |
CN107909065A (en) | Method and device for detecting face occlusion |
CN108830235A (en) | Method and apparatus for generating information |
CN110110811A (en) | Method and apparatus for training model, and method and apparatus for predicting information |
CN108985257A (en) | Method and apparatus for generating information |
CN109086719A (en) | Method and apparatus for outputting data |
CN108595628A (en) | Method and apparatus for pushing information |
CN108491823A (en) | Method and apparatus for generating eye recognition model |
CN109410253B (en) | Method, apparatus, electronic device and computer-readable medium for generating information |
CN108182412A (en) | Method and device for detecting image type |
CN108062544A (en) | Method and apparatus for face liveness detection |
CN108416326A (en) | Face identification method and device |
CN109815365A (en) | Method and apparatus for processing video |
CN108171206A (en) | Information generating method and device |
CN109241934A (en) | Method and apparatus for generating information |
CN109887077A (en) | Method and apparatus for generating three-dimensional model |
CN107958247A (en) | Method and apparatus for facial image recognition |
CN108960110A (en) | Method and apparatus for generating information |
CN108509921A (en) | Method and apparatus for generating information |
CN108133197A (en) | Method and apparatus for generating information |
CN107704388A (en) | Method and apparatus for determining the startup time of an application |
CN108427941A (en) | Method for generating face detection model, face detection method, and device |
CN110046571A (en) | Method and apparatus for identifying age |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||