CN110008930A - Method and apparatus for recognizing the facial state of an animal - Google Patents

Method and apparatus for recognizing the facial state of an animal

Info

Publication number
CN110008930A
CN110008930A (application number CN201910304749.8A)
Authority
CN
China
Prior art keywords
face
image
key point
presented
mouth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910304749.8A
Other languages
Chinese (zh)
Inventor
陈日伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN201910304749.8A
Publication of CN110008930A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/18 - Eye characteristics, e.g. of the iris

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the disclosure disclose a method and apparatus for recognizing the facial state of an animal. One specific embodiment of the method includes: acquiring a face image of a target animal and annotating key points of the facial organs presented in the face image; and, based on the key point annotation results, determining indication information indicating the state of the facial organs presented in the face image, the indication information including at least one of the following: indication information indicating the state of the eyes presented in the face image, and indication information indicating the state of the mouth presented in the face image. This embodiment makes the recognized facial state information of the animal more accurate, so that users can better understand the habits of their pets, improving interest.

Description

Method and apparatus for recognizing the facial state of an animal
Technical field
Embodiments of the disclosure relate to the field of computer technology, and in particular to a method and apparatus for recognizing the facial state of an animal.
Background technique
With the development of science and technology and the popularization of artificial intelligence, AI techniques can be applied in many fields, such as speech recognition, image recognition and smart home. The development of artificial intelligence technology has brought great convenience to users in all respects.
The existing field of image recognition generally includes recognition of facial expressions, recognition of user identity, recognition of facial state, and so on. In facial-state recognition, the facial state of a user is usually recognized on the basis of an image of that user.
Summary of the invention
Embodiments of the disclosure propose a method and apparatus for recognizing the facial state of an animal.
In a first aspect, an embodiment of the disclosure provides a method for recognizing the facial state of an animal. The method includes: acquiring a face image of a target animal, and annotating key points of the facial organs presented in the face image; and, based on the key point annotation results, determining indication information indicating the state of the facial organs presented in the face image, the indication information including at least one of the following: indication information indicating the state of the eyes presented in the face image, and indication information indicating the state of the mouth presented in the face image.
In some embodiments, annotating key points of the facial organs presented in the face image includes: inputting the face image into a pre-trained key point annotation model to obtain annotation results of the key point annotation of the facial organs presented in the face image, the annotation results including at least one of the following: key point annotation results indicating an eye contour, and key point annotation results indicating a mouth contour.
In some embodiments, the key point annotation model is trained as follows: acquiring a training sample set, the training sample set including sample animal face images and key point annotation information of the facial organs corresponding to the sample animal face images; and training a neural network with a machine learning algorithm, using the sample animal face images as input and the corresponding key point annotation information as desired output, to obtain the key point annotation model.
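The supervised training loop described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patent's actual model: a linear regressor trained by stochastic gradient descent stands in for the neural network, and the input images are assumed to be pre-extracted flat feature vectors paired with flat (x, y) key point coordinates.

```python
def train_keypoint_model(samples, n_keypoints, lr=0.05, epochs=300):
    """Fit a linear model mapping image features to 2 * n_keypoints coords.

    `samples` is a list of (features, keypoints) pairs, where `features`
    is a flat list of numbers and `keypoints` a flat list of (x, y)
    coordinates. A hypothetical stand-in for the patent's neural network.
    """
    n_in = len(samples[0][0])
    n_out = 2 * n_keypoints
    # w[i][j]: contribution of input feature j to output coordinate i
    w = [[0.0] * n_in for _ in range(n_out)]
    b = [0.0] * n_out
    for _ in range(epochs):
        for x, y in samples:
            pred = [sum(wi[j] * x[j] for j in range(n_in)) + b[i]
                    for i, wi in enumerate(w)]
            for i in range(n_out):
                err = pred[i] - y[i]  # gradient of squared error (up to a factor of 2)
                for j in range(n_in):
                    w[i][j] -= lr * err * x[j]
                b[i] -= lr * err

    def model(x):
        """Predict flat key point coordinates for one feature vector."""
        return [sum(w[i][j] * x[j] for j in range(n_in)) + b[i]
                for i in range(n_out)]

    return model
```

In the patent's setting the trained model plays the role of the "pre-trained key point annotation model" referenced later: given a face image, it outputs the annotated key point coordinates.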
In some embodiments, the indication information includes indication information indicating whether the eyes are open or closed, and the key point annotation results include a key point set corresponding to the eyes presented in the face image, the key point set including upper-eyelid key points, lower-eyelid key points and two eye-corner key points. Determining, based on the key point annotation results, indication information indicating the state of the facial organs presented in the face image includes: determining a first distance between the upper-eyelid key points and the lower-eyelid key points of the eyes presented in the face image; determining a second distance between the two eye-corner key points of the eyes presented in the face image; selecting, from preset animal eye discrimination thresholds, the eye discrimination threshold corresponding to the animal indicated by the face image, and determining whether a first ratio of the first distance to the second distance is greater than the selected eye discrimination threshold; in response to determining that the first ratio is greater than the selected eye discrimination threshold, determining that the eye state presented in the face image is an open-eye state; and, in response to determining that the first ratio is less than or equal to the selected eye discrimination threshold, determining that the eye state presented in the face image is a closed-eye state.
In some embodiments, the indication information includes indication information indicating whether the mouth is open or closed, and the key point annotation results include a key point set corresponding to the mouth presented in the face image, the key point set including upper-lip key points, lower-lip key points and two mouth-corner key points. Determining, based on the key point annotation results, indication information indicating the state of the facial organs presented in the face image includes: determining a third distance between the upper-lip key points and the lower-lip key points of the mouth presented in the face image; determining a fourth distance between the two mouth-corner key points of the mouth presented in the face image; selecting, from preset animal mouth discrimination thresholds, the mouth discrimination threshold corresponding to the animal indicated by the face image, and determining whether a second ratio of the third distance to the fourth distance is greater than the selected mouth discrimination threshold; in response to determining that the second ratio is greater than the selected mouth discrimination threshold, determining that the mouth state presented in the face image is an open-mouth state; and, in response to determining that the second ratio is less than or equal to the selected mouth discrimination threshold, determining that the mouth state presented in the face image is a closed-mouth state.
In a second aspect, an embodiment of the disclosure provides an apparatus for recognizing the facial state of an animal. The apparatus includes: an acquiring unit configured to acquire a face image of a target animal and to annotate key points of the facial organs presented in the face image; and a determination unit configured to determine, based on the key point annotation results, indication information indicating the state of the facial organs presented in the face image, the indication information including at least one of the following: indication information indicating the state of the eyes presented in the face image, and indication information indicating the state of the mouth presented in the face image.
In some embodiments, the acquiring unit is further configured to: input the face image into a pre-trained key point annotation model to obtain annotation results of the key point annotation of the facial organs presented in the face image, the annotation results including at least one of the following: key point annotation results indicating an eye contour, and key point annotation results indicating a mouth contour.
In some embodiments, the key point annotation model is trained as follows: acquiring a training sample set, the training sample set including sample animal face images and key point annotation information of the facial organs corresponding to the sample animal face images; and training a neural network with a machine learning algorithm, using the sample animal face images as input and the corresponding key point annotation information as desired output, to obtain the key point annotation model.
In some embodiments, the indication information includes indication information indicating whether the eyes are open or closed, and the key point annotation results include a key point set corresponding to the eyes presented in the face image, the key point set including upper-eyelid key points, lower-eyelid key points and two eye-corner key points. The determination unit is further configured to: determine a first distance between the upper-eyelid key points and the lower-eyelid key points of the eyes presented in the face image; determine a second distance between the two eye-corner key points of the eyes presented in the face image; select, from preset animal eye discrimination thresholds, the eye discrimination threshold corresponding to the animal indicated by the face image, and determine whether a first ratio of the first distance to the second distance is greater than the selected eye discrimination threshold; in response to determining that the first ratio is greater than the selected eye discrimination threshold, determine that the eye state presented in the face image is an open-eye state; and, in response to determining that the first ratio is less than or equal to the selected eye discrimination threshold, determine that the eye state presented in the face image is a closed-eye state.
In some embodiments, the indication information includes indication information indicating whether the mouth is open or closed, and the key point annotation results include a key point set corresponding to the mouth presented in the face image, the key point set including upper-lip key points, lower-lip key points and two mouth-corner key points. The determination unit is further configured to: determine a third distance between the upper-lip key points and the lower-lip key points of the mouth presented in the face image; determine a fourth distance between the two mouth-corner key points of the mouth presented in the face image; select, from preset animal mouth discrimination thresholds, the mouth discrimination threshold corresponding to the animal indicated by the face image, and determine whether a second ratio of the third distance to the fourth distance is greater than the selected mouth discrimination threshold; in response to determining that the second ratio is greater than the selected mouth discrimination threshold, determine that the mouth state presented in the face image is an open-mouth state; and, in response to determining that the second ratio is less than or equal to the selected mouth discrimination threshold, determine that the mouth state presented in the face image is a closed-mouth state.
In a third aspect, an embodiment of the disclosure provides a terminal device, including: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method described in any implementation of the first aspect.
In a fourth aspect, an embodiment of the disclosure provides a computer-readable medium on which a computer program is stored, the computer program, when executed by a processor, implementing the method described in any implementation of the first aspect.
The method and apparatus for recognizing the facial state of an animal provided by embodiments of the disclosure acquire a face image of the animal to be recognized, annotate key points of the facial organs presented in the face image, and finally determine, based on the key point annotation results, indication information indicating the state of the facial organs presented in the face image. This makes the recognized facial state information of the animal more accurate, so that users can better understand the habits of their pets, improving interest.
Brief description of the drawings
Other features, objects and advantages of the disclosure will become more apparent by reading the following detailed description of non-restrictive embodiments with reference to the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram to which an embodiment of the disclosure may be applied;
Fig. 2 is a flowchart of one embodiment of the method for recognizing the facial state of an animal according to the disclosure;
Fig. 3 is a schematic diagram of an application scenario of the method for recognizing the facial state of an animal according to an embodiment of the disclosure;
Fig. 4 is a flowchart of another embodiment of the method for recognizing the facial state of an animal according to the disclosure;
Fig. 5 is a structural schematic diagram of one embodiment of the apparatus for recognizing the facial state of an animal according to the disclosure;
Fig. 6 is a structural schematic diagram of an electronic device suitable for implementing embodiments of the disclosure.
Detailed description of embodiments
The disclosure is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the related invention, not to limit it. It should also be noted that, for convenience of description, only the parts relevant to the related invention are shown in the drawings.
It should be noted that, in the absence of conflict, the embodiments in the disclosure and the features in the embodiments may be combined with each other. The disclosure is described in detail below with reference to the drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary architecture 100 in which embodiments of the method or apparatus for recognizing the facial state of an animal of the disclosure may be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102 and 103, a network 104 and a server 105. The network 104 serves as a medium providing communication links between the terminal devices 101, 102, 103 and the server 105, and may include various connection types, such as wired or wireless communication links, or fiber-optic cables.
Various client applications may be installed on the terminal devices 101, 102 and 103, such as image processing applications, search applications, content sharing applications, photo retouching applications and instant messaging applications. The terminal devices 101, 102 and 103 may interact with the server 105 through the network 104 to receive or send messages.
The terminal devices 101, 102 and 103 may be hardware or software. When they are hardware, they may be various electronic devices capable of receiving user operations, including but not limited to smartphones, tablet computers, laptop computers and desktop computers. When they are software, they may be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module. No specific limitation is imposed here.
The server 105 may be a background server supporting the client applications installed on the terminal devices 101, 102 and 103. The server 105 may annotate key points in the face image of the target animal and, based on the annotation results, determine indication information indicating the state of the facial organs presented by the animal.
It should be noted that the server 105 may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module. No specific limitation is imposed here.
It should be noted that the method for recognizing the facial state of an animal provided by embodiments of the disclosure may be executed by the server 105 or by the terminal devices 101, 102 and 103. Correspondingly, the apparatus for recognizing the facial state of an animal may be provided in the server 105 or in the terminal devices 101, 102 and 103.
It should be understood that the numbers of terminal devices, networks and servers in Fig. 1 are merely illustrative. Depending on implementation needs, there may be any number of terminal devices, networks and servers. When the data used in obtaining the facial state information does not need to be acquired remotely, the above system architecture may not include a network and may include only a terminal device or a server.
With continued reference to Fig. 2, a flow 200 of one embodiment of the method for recognizing the facial state of an animal according to the disclosure is shown. The method for recognizing the facial state of an animal includes the following steps:
Step 201: acquire a face image of a target animal, and annotate key points of the facial organs presented in the face image.
In this embodiment, the executing body of the method for recognizing the facial state of an animal (such as the terminal devices 101, 102, 103 or the server 105 shown in Fig. 1) may be equipped with, or connected to, a capture device. The face image may be sent to the executing body after being captured by the capture device. Alternatively, the face image may be stored locally in advance, and the executing body may obtain the face image through path information indicating the location where the face image is stored. Here, the face image may be a face image of an animal, such as a kitten or a puppy.
In this embodiment, after the face image of the target animal is obtained, key point annotation may be performed on the facial organs presented in the acquired face image. Specifically, annotating the facial organs with key points may mark out the contour of each facial organ; for example, the eye contour and the mouth contour may be marked. When annotating the facial organs, characteristic points of the facial organs may be determined. These characteristic points may include, for example, extreme points, boundary points and interpolation points. An extreme point usually has only one definition within a local region of the face and is typically a point at a position such as a mouth corner or an eye corner. Boundary points are usually obtained by uniform sampling along part or all of a facial contour edge, such as lip contour points and eye contour points. Interpolation points are usually obtained by interpolating other boundary points at places on the face without obvious texture features; examples are the mouth center point and some occluded points. After the key points of the organs presented in the face image have been annotated, the part name corresponding to each key point may also be marked.
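The interpolation-point idea above can be illustrated with a small sketch. The patent does not specify the interpolation scheme, so this example makes the simplest assumption: an interpolation point (e.g. the mouth center) is estimated as the centroid of the surrounding boundary key points.

```python
def interpolate_point(boundary_points):
    """Estimate an interpolation point (e.g. the mouth center) as the
    centroid of surrounding boundary key points.

    `boundary_points` is a non-empty list of (x, y) tuples. This is a
    hypothetical illustration; the patent leaves the scheme unspecified.
    """
    n = len(boundary_points)
    x = sum(p[0] for p in boundary_points) / n
    y = sum(p[1] for p in boundary_points) / n
    return (x, y)
```

For instance, given four lip boundary points at the corners of a unit square, the interpolated mouth center would be (0.5, 0.5).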
Specifically, an ASM (Active Shape Model) algorithm or an AAM (Active Appearance Model) algorithm may be used to locate the characteristic points in the face image. Taking the ASM algorithm as an example, an ASM model is first trained on manually annotated characteristic-point image samples of the facial organs to obtain characteristic-point images. Then, the positions of matching characteristic points are searched on the face image according to the features of the characteristic-point images. Finally, a correction is made according to a prior shape model, so that the characteristic point positions satisfy the shape constraints of each facial organ. Here, the facial organs include at least one of the following: eyes, mouth.
Step 202: based on the key point annotation results, determine indication information indicating the state of the facial organs presented in the face image.
In this embodiment, according to the results of the key point annotation performed in step 201 on the facial organs presented in the face image of the target animal, the executing body may determine indication information indicating the state of the facial organs presented in the face image. Here, the state of the facial organs presented in the face image may specifically include at least one of the following: the state of the eyes, and the state of the mouth. The state of the eyes may specifically include an open state or a closed state; likewise, the state of the mouth may include an open state or a closed state.
As one implementation, the indication information may include indication information indicating whether the eyes are in an open state or a closed state. The key point annotation results may specifically include a key point set corresponding to the eyes presented in the face image, which may include upper-eyelid key points, lower-eyelid key points, inner-eye-corner key points and outer-eye-corner key points. Determining the state of the eyes presented in the face image specifically includes the following steps:
Step S11: The executing body may first determine a first distance between the upper-eyelid key points and the lower-eyelid key points of the eyes presented in the face image. Specifically, since there are usually multiple key points annotating the upper-eyelid contour and multiple key points annotating the lower-eyelid contour, the distance between each key point of the upper eyelid and each key point of the lower eyelid may be determined, and the maximum of the determined distances may then be taken as the first distance.
In some optional implementations, in order to make the determined first distance more robust, in other words to make the first distance fall within an appropriate range, the number of key points annotating the upper-eyelid contour and the number of key points annotating the lower-eyelid contour may be set to the same number. Then, for each key point annotating the upper-eyelid contour, the distance between that key point and each key point of the lower eyelid is determined, and the minimum of these distances is selected as a first sub-distance, yielding multiple first sub-distances. The average of the multiple first sub-distances is then used as the first distance.
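The averaged-sub-distance variant above can be sketched in a few lines, assuming the key points are already-annotated (x, y) coordinates:

```python
import math

def first_distance(upper, lower):
    """Average, over each upper-eyelid key point, of its distance to the
    nearest lower-eyelid key point, as in the optional implementation.

    `upper` and `lower` are equal-length lists of (x, y) tuples.
    """
    sub_distances = [min(math.dist(u, l) for l in lower) for u in upper]
    return sum(sub_distances) / len(sub_distances)
```

With two upper-eyelid points at height 1 directly above two lower-eyelid points at height 0, each first sub-distance is 1.0 and the first distance is therefore 1.0.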
Step S12: Determine a second distance between the two eye-corner key points of the eyes presented in the face image. Here, the eye-corner key points are usually annotated by the extreme-point annotation method; in other words, each eye corner corresponds to one key point. Thus, the distance between the two corners of an eye presented in the face image can be determined from the distance between the key points corresponding to the two eye corners.
Step S13: Select, from preset animal eye discrimination thresholds, the eye discrimination threshold corresponding to the animal indicated by the animal face image, and determine whether the first ratio of the first distance to the second distance is greater than the selected eye discrimination threshold.
Here, the categories of various animals, and the eye discrimination threshold corresponding to each animal category, may be stored in advance in the executing body. Since different animals differ in size, their eye sizes and mouth sizes also differ; by setting an eye discrimination threshold for each category of animal, the state of the eyes presented in a face image of an animal of that category can be determined more accurately, improving recognition accuracy. After determining the animal category corresponding to the face image, the executing body determines the eye discrimination threshold corresponding to that category. Then, the first ratio of the first distance to the second distance is compared with the eye discrimination threshold, and the state of the eyes presented in the face image is determined from the comparison result. When the eyes are closed, the distance between the upper eyelid and the lower eyelid is small; when the eyes are open, the distance between the upper eyelid and the lower eyelid is large. Therefore, comparing the first ratio with the eye discrimination threshold can accurately determine the state of the eyes presented in the face image.
Step S14: In response to determining that the first ratio is greater than the selected eye discrimination threshold, determine that the eye state presented in the face image of the target animal is an open-eye state.
It is worth noting here that the above detection determines the state of one eye presented in the image; the other eye may be detected in the same way, which is not repeated here.
Step S15: In response to determining that the first ratio is less than or equal to the selected eye discrimination threshold, determine that the eye state presented in the face image of the target animal is a closed-eye state.
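Steps S11 to S15 can be sketched as follows. The per-species threshold values are hypothetical placeholders (the patent specifies none), key points are assumed to be already-annotated (x, y) coordinates, and the first distance uses the maximum pairwise eyelid distance from step S11.

```python
import math

# Hypothetical per-category eye discrimination thresholds; the patent
# describes the lookup but does not give concrete values.
EYE_THRESHOLDS = {"cat": 0.25, "dog": 0.30}

def eye_state(category, upper, lower, corners):
    """Steps S11-S15: classify one eye as "open" or "closed".

    `upper`/`lower` are eyelid key point lists and `corners` the two
    eye-corner key points, all (x, y) tuples.
    """
    # S11: first distance, the maximum eyelid-to-eyelid key point distance
    d1 = max(math.dist(u, l) for u in upper for l in lower)
    # S12: second distance, between the two eye-corner key points
    d2 = math.dist(corners[0], corners[1])
    # S13: select the category-specific threshold and compute the first ratio
    ratio = d1 / d2
    threshold = EYE_THRESHOLDS[category]
    # S14/S15: open if the ratio exceeds the threshold, otherwise closed
    return "open" if ratio > threshold else "closed"
```

For a cat eye with corners 4 units apart and eyelids 2 units apart, the ratio 0.5 exceeds the placeholder threshold 0.25, so the eye is classified as open.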
As another implementation, the indication information includes indication information indicating whether the mouth is open or closed, and the key point annotation results include a key point set corresponding to the mouth presented in the face image, the key point set including upper-lip key points, lower-lip key points and two mouth-corner key points. Determining the state of the mouth of the animal presented in the face image specifically includes the following steps:
Step S21: Determine a third distance between the upper-lip key points and the lower-lip key points of the mouth presented in the face image.
Step S22: Determine a fourth distance between the two mouth-corner key points of the mouth presented in the face image.
Step S23: Select, from preset animal mouth discrimination thresholds, the mouth discrimination threshold corresponding to the animal indicated by the face image of the target animal, and determine whether a second ratio of the third distance to the fourth distance is greater than the selected mouth discrimination threshold.
Step S24: In response to determining that the second ratio is greater than the selected mouth discrimination threshold, determine that the mouth state of the animal presented in the face image of the target animal is an open-mouth state.
Step S25: In response to determining that the second ratio is less than or equal to the selected mouth discrimination threshold, determine that the mouth state of the animal presented in the animal face image is a closed-mouth state.
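Steps S21 to S25 mirror the eye procedure and can be sketched under the same assumptions: hypothetical per-category thresholds and already-annotated (x, y) key points.

```python
import math

# Hypothetical per-category mouth discrimination thresholds; actual
# values are not given in the patent.
MOUTH_THRESHOLDS = {"cat": 0.15, "dog": 0.20}

def mouth_state(category, upper_lip, lower_lip, corners):
    """Steps S21-S25: classify the mouth as "open" or "closed"."""
    # S21: third distance between upper-lip and lower-lip key points
    d3 = max(math.dist(u, l) for u in upper_lip for l in lower_lip)
    # S22: fourth distance between the two mouth-corner key points
    d4 = math.dist(corners[0], corners[1])
    # S23-S25: compare the second ratio with the category threshold
    ratio = d3 / d4
    return "open" if ratio > MOUTH_THRESHOLDS[category] else "closed"
```

The only differences from the eye case are the key point sets used and the threshold table consulted.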
The specific implementation of detecting the state of the mouth of the animal presented in the face image, and the beneficial effects it brings, may refer to the specific steps of detecting the eye state of the animal presented in the face image, and are not repeated here.
With further reference to Fig. 3, an application scenario diagram of the method for recognizing the facial state of an animal of the disclosure is shown.
In the application scenario shown in Fig. 3, a face image 301 of a kitten acquired by a capture device is input to a server 302. After receiving the face image of the kitten, the server 302 may annotate key points in the face image to obtain an image 303 annotated with key points. Here, image 303 is annotated with key points indicating the mouth contour and key points indicating the eye contour. Then, based on the annotated key points, it is determined that the state of the eyes presented in face image 301 is an open-eye state, and that the state of the mouth presented in face image 301 is an open-mouth state.
The method for identifying the state of an animal face provided by the embodiments of the present disclosure obtains the face image of the animal to be identified, performs key point annotation on the face organs presented in the face image, and finally determines, based on the key point annotation results, the indication information indicating the state of the face organs presented in the face image. The determined facial state information of the animal can thereby be made more accurate, so that users can better understand the habits of the pets they raise, improving interest.
With further reference to Fig. 4, a flow 400 of another embodiment of the method for identifying the state of an animal face of the present disclosure is illustrated. The flow 400 of the method for identifying an animal face state comprises the following steps:
Step 401: obtain the face image of a target animal, input the face image to a pre-trained key point marking model, and obtain the annotation results of performing key point annotation on the face organs presented in the face image.
In the present embodiment, the executing subject of the method for identifying the state of an animal face (for example, the terminal devices 101, 102, 103 or the server 105 shown in Fig. 1) may have a capture apparatus installed, or may be connected with a capture apparatus. The face image may be sent to the executing subject after the capture apparatus shoots it. Alternatively, the face image may be stored locally in advance, and the executing subject may obtain the face image through information indicating the path of the location where the face image is stored. Here, the face image may be an animal face image; the animal may be, for example, a kitten, a puppy, or the like.
In the present embodiment, after the face image of the target animal is obtained, the face image may be input to the pre-trained key point marking model, so as to obtain the annotation results of performing key point annotation on the face organs presented in the face image.
As an example, the key point marking model may be a correspondence table, pre-established and stored by a technician based on statistics of a large number of face images of animals of each category and the key point markup information corresponding to those face images, that maps multiple face images to their corresponding key point annotation maps; or it may be a model obtained by training an initial model (for example, a neural network) with a machine learning method based on preset training samples. Here, the annotation results include at least one of the following: key point annotation results indicating the eye contour, and key point annotation results indicating the mouth contour.
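The first alternative described above, a pre-established correspondence table from face images to key point annotation maps, might be realized as a nearest-neighbour lookup over image feature vectors. The feature representation, table contents, and names below are purely illustrative assumptions:

```python
import math

# Hypothetical table: feature vector of a stored face image -> its key point map.
ANNOTATION_TABLE = [
    ((0.1, 0.9, 0.3), {"eye": [(10, 12), (20, 12)], "mouth": [(14, 30), (16, 30)]}),
    ((0.8, 0.2, 0.5), {"eye": [(11, 14), (22, 14)], "mouth": [(15, 33), (18, 33)]}),
]

def look_up_annotation(feature):
    """Return the key point map of the stored image whose feature is closest."""
    _, best = min(ANNOTATION_TABLE, key=lambda row: math.dist(row[0], feature))
    return best

print(look_up_annotation((0.1, 0.85, 0.3))["eye"])  # -> [(10, 12), (20, 12)]
```

A table of this kind trades generalization for simplicity: it only works well when the query image closely resembles a stored one, which is why the disclosure also offers the trained-model alternative.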
In some optional implementations of the present embodiment, the key point marking model is obtained by training as follows:
Step S41: obtain a training sample set.
Here, the training sample set includes sample animal face images and the key point markup information of the face organs corresponding to the sample animal face images.
Specifically, the sample animal face images may include, but are not limited to, face images of dogs, face images of cats, face images of horses, and the like. Dogs may be further divided into multiple breeds, and dogs of different breeds correspond to different face images. The key point markup information of each face organ corresponding to a sample animal face image may specifically include key point markup information indicating the mouth contour and key point markup information indicating the eye contour. The key point markup information indicating the mouth contour may specifically include upper-lip contour key point markup information, lower-lip contour key point markup information, and key point markup information of the two mouth corners. The key point markup information indicating the eye contour may specifically include key point markup information of the upper eyelid contour, key point markup information of the lower eyelid contour, and key point markup information of the two canthi.
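A single training sample as described above might be organized as follows. The field names, file path, and coordinates are illustrative assumptions, not a schema from the disclosure:

```python
# One training sample: a sample animal face image plus the key point markup
# information of each face organ described in step S41.
sample = {
    "image_path": "samples/dog_0001.png",  # hypothetical path
    "mouth": {
        "upper_lip_contour": [(40, 70), (50, 66), (60, 70)],
        "lower_lip_contour": [(40, 74), (50, 80), (60, 74)],
        "corners": [(38, 72), (62, 72)],   # two mouth-corner key points
    },
    "eyes": {
        "upper_eyelid_contour": [(20, 30), (25, 27), (30, 30)],
        "lower_eyelid_contour": [(20, 33), (25, 35), (30, 33)],
        "canthi": [(18, 31), (32, 31)],    # two canthus (eye-corner) key points
    },
}

print(len(sample["mouth"]["corners"]))  # 2
```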
Step S42: take a sample animal face image as input and the key point markup information of the face organs corresponding to the sample animal face image as desired output, and train a neural network using a machine learning algorithm to obtain the key point marking model.
Here, the initial model may be a neural network (for example, a convolutional neural network or a deep neural network). The neural network may include a feature extraction layer, a first sub-network, and a second sub-network. The feature extraction layer extracts features of the face image and generates a feature map corresponding to the input face image; the feature map may include image texture, shape, contour, and so on. The first sub-network is connected with the feature extraction layer and performs key point annotation on the face image based on the features extracted by the feature extraction layer. The second sub-network is connected with the first sub-network and determines the error between the annotation results of performing key point annotation on the face image and the pre-annotated key point information of the face image. The feature extraction layer, the first sub-network, and the second sub-network may each include a preset number of convolutional layers and pooling layers.
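Since the feature extraction layer and the sub-networks may each stack a preset number of convolutional and pooling layers, the spatial size of the feature map they produce can be sanity-checked with the standard output-size formula. The layer configuration below is an illustrative assumption:

```python
def out_size(size, kernel, stride=1, padding=0):
    """Spatial output size of one convolution or pooling layer."""
    return (size + 2 * padding - kernel) // stride + 1

def feature_map_size(size, layers):
    """Chain the formula through a stack of (kernel, stride, padding) layers."""
    for kernel, stride, padding in layers:
        size = out_size(size, kernel, stride, padding)
    return size

# Illustrative stack: 3x3 conv (pad 1), 2x2 pool (stride 2), repeated twice.
stack = [(3, 1, 1), (2, 2, 0), (3, 1, 1), (2, 2, 0)]
print(feature_map_size(64, stack))  # 64 -> 64 -> 32 -> 32 -> 16
```

This check matters when adjusting the number of convolutional layers, kernel sizes, or strides during training (as in the parameter-adjustment step below): each change alters the feature-map size that the first sub-network consumes.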
The following training steps are executed:
For a sample animal face image in the training sample set, input the sample animal face image to the neural network, and obtain the key point annotation results of the sample animal face image and the error between those annotation results and the pre-annotated key point information.
For the annotation-result errors corresponding to the obtained sample face images, construct a preset loss function. The loss function may be, for example, a LOSS loss function.
Based on the loss function, determine whether the loss value of the loss function reaches a preset loss value. In response to determining that the preset loss value is reached, determine that training of the neural network is complete, and take the trained neural network as the key point marking model.
In response to determining that the preset loss value is not reached, adjust the parameters of the neural network, for example the number of convolutional layers, the size of the convolution kernels, the stride, and so on, and continue executing the above training steps.
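The training loop above, computing the loss, stopping once it falls below the preset loss value, and otherwise adjusting parameters and continuing, can be illustrated with a toy one-parameter fit standing in for the neural network. This is a sketch under simplified assumptions, not the disclosed model:

```python
# Toy training step: inputs paired with desired outputs, one trainable
# parameter w, and a stop condition on a preset loss value.
samples = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]  # (input, desired output)
w = 0.0                    # the "network parameter" being trained
preset_loss = 1e-4
learning_rate = 0.02

for step in range(10_000):
    # Mean squared error between the model output and the desired output.
    loss = sum((w * x - y) ** 2 for x, y in samples) / len(samples)
    if loss < preset_loss:             # preset loss value reached: training done
        break
    grad = sum(2 * (w * x - y) * x for x, y in samples) / len(samples)
    w -= learning_rate * grad          # adjust the parameter, continue training

print(round(w, 2))  # converges near 3.0
```

A real key point marking model would update thousands of weights by backpropagation, but the control flow, loss, threshold check, parameter adjustment, repeat, is the same as in the steps above.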
Step 402: based on the key point annotation results, determine the indication information indicating the state of the face organs presented in the face image.
The specific implementation of step 402, and the beneficial effects it brings, may refer to step 202 in the embodiment shown in Fig. 2, and are not described again here.
By performing key point annotation on face images with the key point marking model shown in Fig. 4, the annotation results of performing key point annotation on animal face images can be made more accurate, so that the state presented by each organ corresponding to the face image of the target animal can be determined more accurately.
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an apparatus for identifying an animal face state. This apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may specifically be applied in various electronic devices.
As shown in Fig. 5, the apparatus 500 for identifying the state of an animal face provided by the present embodiment includes an acquiring unit 501 and a determination unit 502. The acquiring unit 501 is configured to obtain the face image of a target animal and perform key point annotation on the face organs presented in the face image. The determination unit 502 is configured to determine, based on the key point annotation results, indication information indicating the state of the face organs presented in the face image, the indication information including at least one of the following: indication information indicating the state of the eyes presented in the face image, and indication information indicating the state of the mouth presented in the face image.
In the present embodiment, in the apparatus 500 for identifying an animal face state, the specific processing of the acquiring unit 501 and the determination unit 502, and the technical effects they bring, may refer to the related descriptions of step 201 and step 202 in the embodiment corresponding to Fig. 2, respectively, and are not described again here.
In some optional implementations of the present embodiment, the acquiring unit 501 is further configured to: input the face image to a pre-trained key point marking model, and obtain the annotation results of performing key point annotation on the face organs presented in the face image, the annotation results including at least one of the following: key point annotation results indicating the eye contour, and key point annotation results indicating the mouth contour.
In some optional implementations of the present embodiment, the key point marking model is obtained by training as follows: obtain a training sample set, the training sample set including sample animal face images and the key point markup information of the face organs corresponding to the sample animal face images; take a sample animal face image as input and the key point markup information of the face organs corresponding to the sample animal face image as desired output, and train a neural network using a machine learning algorithm to obtain the key point marking model.
In some optional implementations of the present embodiment, the indication information includes indication information indicating whether the eyes are open or closed, the key point annotation results include a set of key points corresponding to the eyes presented in the face image, and the set of key points corresponding to the eyes presented in the face image includes an upper eyelid key point, a lower eyelid key point, and two canthus key points. The determination unit 502 is further configured to: determine a first distance between the upper eyelid key point and the lower eyelid key point indicating the eyes presented in the face image; determine a second distance between the two canthus key points indicating the eyes presented in the face image; select, from pre-set animal eye discrimination thresholds, the eye discrimination threshold corresponding to the animal indicated by the face image, and determine whether the first ratio between the first distance and the second distance is greater than the selected eye discrimination threshold; in response to determining that the first ratio is greater than the selected eye discrimination threshold, determine that the eye state presented in the face image is the open-eye state; in response to determining that the first ratio is less than or equal to the eye discrimination threshold, determine that the eye state presented in the face image is the closed-eye state.
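The eye-state branch of the determination unit can be sketched directly from eyelid and canthus coordinates. The threshold values, one per species, are illustrative assumptions:

```python
import math

EYE_THRESHOLDS = {"cat": 0.20, "dog": 0.22}  # hypothetical pre-set values

def eye_state(upper_eyelid, lower_eyelid, left_canthus, right_canthus, species):
    """Return 'open' or 'closed' for the eyes presented in the face image."""
    first_distance = math.dist(upper_eyelid, lower_eyelid)    # eyelid gap
    second_distance = math.dist(left_canthus, right_canthus)  # eye width
    first_ratio = first_distance / second_distance
    return "open" if first_ratio > EYE_THRESHOLDS[species] else "closed"

print(eye_state((25, 27), (25, 35), (18, 31), (32, 31), "cat"))  # open
```

Keeping a separate threshold per species accounts for the different eye proportions of, say, cats and dogs, which is why the unit selects the threshold by the animal indicated in the face image rather than using one fixed value.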
In some optional implementations of the present embodiment, the indication information includes indication information indicating whether the mouth is open or closed, the key point annotation results include a set of key points corresponding to the mouth presented in the face image, and the set of key points corresponding to the mouth presented in the face image includes an upper lip key point, a lower lip key point, and two mouth-corner key points. The determination unit 502 is further configured to: determine a third distance between the upper lip key point and the lower lip key point indicating the mouth presented in the face image; determine a fourth distance between the two mouth corners of the mouth presented in the face image; select, from pre-set animal mouth discrimination thresholds, the mouth discrimination threshold corresponding to the animal indicated by the face image, and determine whether the second ratio between the third distance and the fourth distance is greater than the selected mouth discrimination threshold; in response to determining that the second ratio is greater than the selected mouth discrimination threshold, determine that the mouth state presented in the face image is the open-mouth state; in response to determining that the second ratio is less than or equal to the selected mouth discrimination threshold, determine that the mouth state presented in the face image is the closed-mouth state.
The apparatus for identifying the state of an animal face provided by the embodiments of the present disclosure obtains the face image of the animal to be identified, performs key point annotation on the face organs presented in the face image, and finally determines, based on the key point annotation results, the indication information indicating the state of the face organs presented in the face image. The determined facial state information of the animal can thereby be made more accurate, so that users can better understand the habits of the pets they raise, improving interest.
Referring now to Fig. 6, a structural schematic diagram of an electronic device (for example, the terminal device in Fig. 1) 600 suitable for implementing embodiments of the present disclosure is illustrated. Terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, laptops, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (for example, vehicle navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The terminal device shown in Fig. 6 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in Fig. 6, the electronic device 600 may include a processing unit (for example, a central processing unit or a graphics processor) 601, which can execute various appropriate actions and processing according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the electronic device 600. The processing unit 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Generally, the following devices may be connected to the I/O interface 605: an input device 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, or gyroscope; an output device 607 including, for example, a liquid crystal display (LCD), speaker, or vibrator; a storage device 608 including, for example, a magnetic tape or hard disk; and a communication device 609. The communication device 609 may allow the electronic device 600 to communicate wirelessly or by wire with other devices to exchange data. Although Fig. 6 shows an electronic device 600 with various devices, it should be understood that it is not required to implement or have all of the devices shown; more or fewer devices may alternatively be implemented or provided. Each box shown in Fig. 6 may represent one device, or may represent multiple devices as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the methods shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 609, or installed from the storage device 608, or installed from the ROM 602. When the computer program is executed by the processing unit 601, the above functions defined in the methods of the embodiments of the present disclosure are executed.
It should be noted that the computer-readable medium described in the embodiments of the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the embodiments of the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program, where the program may be used by or in combination with an instruction execution system, apparatus, or device. In the embodiments of the present disclosure, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; such a medium may send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to: electric wire, optical cable, RF (radio frequency), and the like, or any suitable combination of the above.
The above computer-readable medium may be included in the above terminal device, or may exist separately without being assembled into the terminal device. The above computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: obtain the face image of a target animal, and perform key point annotation on the face organs presented in the face image; and, based on the key point annotation results, determine indication information indicating the state of the face organs presented in the face image, the indication information including at least one of the following: indication information indicating the state of the eyes presented in the face image, and indication information indicating the state of the mouth presented in the face image.
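The two operations such a program performs can be sketched end to end. Here `annotate_key_points` is a hypothetical stand-in for the trained key point marking model, and all coordinates and thresholds are illustrative assumptions:

```python
def annotate_key_points(image):
    """Hypothetical fixed output; a real model would predict from pixels."""
    return {
        "upper_eyelid": (25, 27), "lower_eyelid": (25, 35),
        "canthi": ((18, 31), (32, 31)),
        "upper_lip": (50, 66), "lower_lip": (50, 80),
        "mouth_corners": ((38, 72), (62, 72)),
    }

def indication_info(image, eye_threshold=0.2, mouth_threshold=0.25):
    """Annotate key points, then derive the eye and mouth state indications."""
    k = annotate_key_points(image)
    eye_ratio = abs(k["upper_eyelid"][1] - k["lower_eyelid"][1]) / abs(
        k["canthi"][0][0] - k["canthi"][1][0])
    mouth_ratio = abs(k["upper_lip"][1] - k["lower_lip"][1]) / abs(
        k["mouth_corners"][0][0] - k["mouth_corners"][1][0])
    return {
        "eyes": "open" if eye_ratio > eye_threshold else "closed",
        "mouth": "open" if mouth_ratio > mouth_threshold else "closed",
    }

print(indication_info("kitten.png"))  # {'eyes': 'open', 'mouth': 'open'}
```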
The computer program code for executing the operations of the embodiments of the present disclosure may be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as a standalone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In situations involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each box in a flowchart or block diagram may represent a module, program segment, or part of code, which contains one or more executable instructions for implementing the specified logic function. It should also be noted that, in some alternative implementations, the functions noted in the boxes may occur in an order different from that noted in the drawings. For example, two boxes shown in succession may actually be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each box in the block diagrams and/or flowcharts, and combinations of boxes in the block diagrams and/or flowcharts, may be implemented with a dedicated hardware-based system that performs the specified functions or operations, or with a combination of dedicated hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by software or by hardware. The described units may also be provided in a processor; for example, a processor may be described as including an acquiring unit and a determination unit. The names of these units do not, under certain circumstances, constitute a limitation on the units themselves; for example, the acquiring unit may also be described as "a unit that obtains the face image of a target animal and performs key point annotation on the face organs presented in the face image".
The above description is only a preferred embodiment of the present disclosure and an explanation of the applied technical principles. Those skilled in the art should understand that the scope of invention involved in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above inventive concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the embodiments of the present disclosure.

Claims (12)

1. A method for identifying an animal face state, comprising:
obtaining a face image of a target animal, and performing key point annotation on face organs presented in the face image;
based on key point annotation results, determining indication information indicating a state of the face organs presented in the face image, the indication information including at least one of the following: indication information indicating a state of eyes presented in the face image, and indication information indicating a state of a mouth presented in the face image.
2. The method according to claim 1, wherein the performing key point annotation on the face organs presented in the face image comprises:
inputting the face image to a pre-trained key point marking model, and obtaining annotation results of performing key point annotation on the face organs presented in the face image, the annotation results including at least one of the following: key point annotation results indicating an eye contour, and key point annotation results indicating a mouth contour.
3. The method according to claim 2, wherein the key point marking model is obtained by training as follows:
obtaining a training sample set, the training sample set including sample animal face images and key point markup information of face organs corresponding to the sample animal face images;
taking a sample animal face image as input and the key point markup information of the face organs corresponding to the sample animal face image as desired output, and training a neural network using a machine learning algorithm to obtain the key point marking model.
4. The method according to claim 1, wherein the indication information includes indication information indicating whether the eyes are open or closed, the key point annotation results include a set of key points corresponding to the eyes presented in the face image, and the set of key points corresponding to the eyes presented in the face image includes an upper eyelid key point, a lower eyelid key point, and two canthus key points; and
the determining, based on the key point annotation results, the indication information indicating the state of the face organs presented in the face image comprises:
determining a first distance between the upper eyelid key point and the lower eyelid key point indicating the eyes presented in the face image;
determining a second distance between the two canthus key points indicating the eyes presented in the face image;
selecting, from pre-set animal eye discrimination thresholds, an eye discrimination threshold corresponding to the animal indicated by the face image, and determining whether a first ratio between the first distance and the second distance is greater than the selected eye discrimination threshold;
in response to determining that the first ratio is greater than the selected eye discrimination threshold, determining that the eye state presented in the face image is an open-eye state;
in response to determining that the first ratio is less than or equal to the selected eye discrimination threshold, determining that the eye state presented in the face image is a closed-eye state.
5. The method according to claim 1, wherein the indication information includes indication information indicating whether the mouth is open or closed, the key point annotation results include a set of key points corresponding to the mouth presented in the face image, and the set of key points corresponding to the mouth presented in the face image includes an upper lip key point, a lower lip key point, and two mouth-corner key points; and
the determining, based on the key point annotation results, the indication information indicating the state of the face organs presented in the face image comprises:
determining a third distance between the upper lip key point and the lower lip key point indicating the mouth presented in the face image;
determining a fourth distance between the two mouth corners of the mouth presented in the face image;
selecting, from pre-set animal mouth discrimination thresholds, a mouth discrimination threshold corresponding to the animal indicated by the face image, and determining whether a second ratio between the third distance and the fourth distance is greater than the selected mouth discrimination threshold;
in response to determining that the second ratio is greater than the selected mouth discrimination threshold, determining that the mouth state presented in the face image is an open-mouth state;
in response to determining that the second ratio is less than or equal to the selected mouth discrimination threshold, determining that the mouth state presented in the face image is a closed-mouth state.
6. An apparatus for identifying an animal face state, comprising:
an acquiring unit, configured to obtain a face image of a target animal, and to perform key point annotation on face organs presented in the face image;
a determination unit, configured to determine, based on key point annotation results, indication information indicating a state of the face organs presented in the face image, the indication information including at least one of the following: indication information indicating a state of eyes presented in the face image, and indication information indicating a state of a mouth presented in the face image.
7. The apparatus according to claim 6, wherein the acquiring unit is further configured to:
input the face image to a pre-trained key point marking model, and obtain annotation results of performing key point annotation on the face organs presented in the face image, the annotation results including at least one of the following: key point annotation results indicating an eye contour, and key point annotation results indicating a mouth contour.
8. The apparatus according to claim 7, wherein the key point marking model is obtained by training as follows:
obtaining a training sample set, the training sample set including sample animal face images and key point markup information of face organs corresponding to the sample animal face images;
taking a sample animal face image as input and the key point markup information of the face organs corresponding to the sample animal face image as desired output, and training a neural network using a machine learning algorithm to obtain the key point marking model.
9. The apparatus according to claim 6, wherein the indication information includes indication information indicating whether the eyes are open or closed, the key point annotation results include a set of key points corresponding to the eyes presented in the face image, and the set of key points corresponding to the eyes presented in the face image includes an upper eyelid key point, a lower eyelid key point, and two canthus key points; and
the determination unit is further configured to:
determine a first distance between the upper eyelid key point and the lower eyelid key point indicating the eyes presented in the face image;
determine a second distance between the two canthus key points indicating the eyes presented in the face image;
select, from pre-set animal eye discrimination thresholds, an eye discrimination threshold corresponding to the animal indicated by the face image, and determine whether a first ratio between the first distance and the second distance is greater than the selected eye discrimination threshold;
in response to determining that the first ratio is greater than the selected eye discrimination threshold, determine that the eye state presented in the face image is an open-eye state;
in response to determining that the first ratio is less than or equal to the selected eye discrimination threshold, determine that the eye state presented in the face image is a closed-eye state.
10. The device according to claim 6, wherein the indication information includes indication information indicating whether the mouth is in an open or closed state, the key point annotation result includes a set of key points corresponding to the mouth presented in the face image, and the set of key points corresponding to the mouth presented in the face image includes an upper lip key point, a lower lip key point, and two mouth-corner key points; and
the determination unit is further configured to:
determine a third distance between the upper lip key point and the lower lip key point of the mouth presented in the face image;
determine a fourth distance between the two mouth-corner key points of the mouth presented in the face image;
select, from preset animal mouth discrimination thresholds, the mouth discrimination threshold corresponding to the animal indicated by the face image, and determine whether a second ratio of the third distance to the fourth distance is greater than the selected mouth discrimination threshold;
in response to determining that the second ratio is greater than the selected mouth discrimination threshold, determine that the mouth state presented in the face image is an open-mouth state; and
in response to determining that the second ratio is less than or equal to the selected mouth discrimination threshold, determine that the mouth state presented in the face image is a closed-mouth state.
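Claim 10's mouth-state test follows the same pattern, with the added step of selecting the discrimination threshold for the animal indicated by the face image. The species names and threshold values below are hypothetical placeholders; the claim specifies only that preset per-animal thresholds exist.

```python
import math

# Hypothetical preset per-animal mouth discrimination thresholds; the
# claim leaves the actual species and values unspecified.
ANIMAL_MOUTH_THRESHOLDS = {"cat": 0.30, "dog": 0.45}

def mouth_state(animal, upper_lip, lower_lip, corner_a, corner_b):
    """Classify the mouth state presented by a face image, per claim 10:
    the lip-gap / mouth-width ratio is compared against the threshold
    selected for the animal indicated by the face image."""
    threshold = ANIMAL_MOUTH_THRESHOLDS[animal]          # per-species selection
    third_distance = math.dist(upper_lip, lower_lip)     # gap between lips
    fourth_distance = math.dist(corner_a, corner_b)      # mouth-corner span
    second_ratio = third_distance / fourth_distance
    return "open" if second_ratio > threshold else "closed"
```

The per-species lookup is the point of the "selected threshold" limitation: the same geometric ratio can mean an open mouth for one species and a resting mouth for another, so a single global threshold would misclassify.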
11. An electronic device, comprising:
one or more processors; and
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1 to 5.
12. A computer-readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method according to any one of claims 1 to 5.
CN201910304749.8A 2019-04-16 2019-04-16 The method and apparatus of animal face state for identification Pending CN110008930A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910304749.8A CN110008930A (en) 2019-04-16 2019-04-16 The method and apparatus of animal face state for identification


Publications (1)

Publication Number Publication Date
CN110008930A true CN110008930A (en) 2019-07-12

Family

ID=67172255

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910304749.8A Pending CN110008930A (en) 2019-04-16 2019-04-16 The method and apparatus of animal face state for identification

Country Status (1)

Country Link
CN (1) CN110008930A (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080037836A1 (en) * 2006-08-09 2008-02-14 Arcsoft, Inc. Method for driving virtual facial expressions by automatically detecting facial expressions of a face image
CN108460345A (en) * 2018-02-08 2018-08-28 电子科技大学 A kind of facial fatigue detection method based on face key point location
CN108614999A (en) * 2018-04-16 2018-10-02 贵州大学 Eyes based on deep learning open closed state detection method
CN108875524A (en) * 2018-01-02 2018-11-23 北京旷视科技有限公司 Gaze estimation method, device, system and storage medium
CN108960065A (en) * 2018-06-01 2018-12-07 浙江零跑科技有限公司 A kind of driving behavior detection method of view-based access control model
CN109271875A (en) * 2018-08-24 2019-01-25 中国人民解放军火箭军工程大学 A kind of fatigue detection method based on supercilium and eye key point information
CN109376684A (en) * 2018-11-13 2019-02-22 广州市百果园信息技术有限公司 A kind of face critical point detection method, apparatus, computer equipment and storage medium
CN109447025A (en) * 2018-11-08 2019-03-08 北京旷视科技有限公司 Fatigue detection method, device, system and computer readable storage medium


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112633305A (en) * 2019-09-24 2021-04-09 深圳云天励飞技术有限公司 Key point marking method and related equipment
CN111126347A (en) * 2020-01-06 2020-05-08 腾讯科技(深圳)有限公司 Human eye state recognition method and device, terminal and readable storage medium
CN111126347B (en) * 2020-01-06 2024-02-20 腾讯科技(深圳)有限公司 Human eye state identification method, device, terminal and readable storage medium

Similar Documents

Publication Publication Date Title
CN108446387A (en) Method and apparatus for updating face registration library
CN109858445A (en) Method and apparatus for generating model
CN108898185A (en) Method and apparatus for generating image recognition model
CN110288049A (en) Method and apparatus for generating image recognition model
CN109086719A (en) Method and apparatus for output data
CN108898186A (en) Method and apparatus for extracting image
CN110110811A (en) Method and apparatus for training pattern, the method and apparatus for predictive information
CN108985257A (en) Method and apparatus for generating information
CN109902659A (en) Method and apparatus for handling human body image
CN109993150A (en) The method and apparatus at age for identification
CN109829432A (en) Method and apparatus for generating information
CN110009059A (en) Method and apparatus for generating model
CN109815365A (en) Method and apparatus for handling video
CN108345387A (en) Method and apparatus for output information
CN109919244A (en) Method and apparatus for generating scene Recognition model
CN109299477A (en) Method and apparatus for generating text header
CN109981787A (en) Method and apparatus for showing information
CN108491823A (en) Method and apparatus for generating eye recognition model
CN109214501A (en) The method and apparatus of information for identification
CN109241934A (en) Method and apparatus for generating information
CN109947989A (en) Method and apparatus for handling video
CN108509921A (en) Method and apparatus for generating information
CN108491808A (en) Method and device for obtaining information
CN110084317A (en) The method and apparatus of image for identification
CN109117758A (en) Method and apparatus for generating information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination