WO2019061660A1 - Recruitment method, electronic device, and readable storage medium - Google Patents

Recruitment method, electronic device, and readable storage medium

Info

Publication number
WO2019061660A1
WO2019061660A1 · PCT/CN2017/108764 · CN2017108764W
Authority
WO
WIPO (PCT)
Prior art keywords
organ
performance level
training
face
performance
Prior art date
Application number
PCT/CN2017/108764
Other languages
English (en)
Chinese (zh)
Inventor
王健宗
王晨羽
马进
肖京
Original Assignee
平安科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2019061660A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 - Administration; Management
    • G06Q10/10 - Office automation; Time management
    • G06Q10/105 - Human resources
    • G06Q10/1053 - Employment or hiring
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G06V40/165 - Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G06V40/171 - Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Definitions

  • the present application relates to the field of computer technologies, and in particular, to a recruitment method, an electronic device, and a readable storage medium.
  • the purpose of the present application is to provide a recruitment method, an electronic device, and a readable storage medium, which are intended to assist recruitment by identifying the face of a candidate and predicting corresponding performance.
  • a first aspect of the present application provides an electronic device, where the electronic device includes a memory and a processor, the memory stores a recruitment system executable on the processor, and the recruitment system, when executed by the processor, implements the following steps:
  • the face photo is identified by using a predetermined recognition model, and the performance level corresponding to the face photo is obtained, so that the recruiter can refer to the obtained performance level when recruiting the candidate;
  • the predetermined recognition model is a deep convolutional neural network model obtained by training, in advance, on a preset number of face sample images labeled with different performance levels.
  • a second aspect of the present application provides a recruitment method, the recruitment method comprising:
  • Step 1: Obtain a photo of the candidate's face.
  • Step 2: Identify the face photo by using a predetermined recognition model, and obtain the performance level corresponding to the face photo, for the recruiter to refer to when recruiting the candidate; wherein the predetermined recognition model is a deep convolutional neural network model obtained by training, in advance, on a preset number of face sample pictures labeled with different performance levels.
  • a third aspect of the present application provides a computer readable storage medium storing a recruiting system, the recruiting system being executable by at least one processor to cause the at least one processor to perform the following steps:
  • the recognition model is a deep convolutional neural network model obtained by training a preset number of face sample images marked with different performance levels in advance.
  • the recruitment method, system, and readable storage medium proposed by the present application identify the face photo of an applicant using a deep convolutional neural network model trained on a preset number of face sample images labeled with different performance levels, and predict the applicant's performance level from the recognition result. Through deep learning, a relationship between the face and the performance level is found, so that at recruitment time a corresponding performance level can be determined from the applicant's face and used as a reference factor for recruitment, providing a new reference factor to assist recruitment and improve its quality.
  • FIG. 1 is a schematic diagram of an operating environment of a preferred embodiment of the recruitment system 10 of the present application.
  • FIG. 2 is a schematic flowchart of an embodiment of a recruitment method of the present application.
  • the terms “first”, “second”, and the like in the present application are for description only, and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated.
  • a feature defined with “first” or “second” may explicitly or implicitly include at least one such feature.
  • the technical solutions of the various embodiments may be combined with each other, provided that the combination can be realized by those skilled in the art; when a combination of technical solutions is contradictory or impossible to implement, the combination should be considered not to exist, and it is not within the scope of protection claimed by this application.
  • FIG. 1 is a schematic diagram of an operating environment of a preferred embodiment of the recruitment system 10 of the present application.
  • the recruitment system 10 is installed and operated in the electronic device 1.
  • the electronic device 1 may include, but is not limited to, a memory 11, a processor 12, and a display 13.
  • Figure 1 shows only the electronic device 1 with components 11-13; it should be understood that not all of the illustrated components are required, and more or fewer components may be implemented instead.
  • the memory 11 comprises at least one type of readable storage medium, which in some embodiments may be an internal storage unit of the electronic device 1, such as a hard disk or memory of the electronic device 1.
  • the memory 11 may also be an external storage device of the electronic device 1 in other embodiments, such as a plug-in hard disk, a smart memory card (SMC), a Secure Digital (SD) card, a flash card, or the like equipped on the electronic device 1.
  • the memory 11 may also include both an internal storage unit of the electronic device 1 and an external storage device.
  • the memory 11 is configured to store application software installed on the electronic device 1 and various types of data, such as program codes of the recruiting system 10 and the like.
  • the memory 11 can also be used to temporarily store data that has been output or is about to be output.
  • the processor 12, in some embodiments, may be a central processing unit (CPU), a microprocessor, or another data processing chip for running program code or processing data stored in the memory 11, for example, executing the recruitment system 10.
  • the display 13 in some embodiments may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch display, or the like.
  • the display 13 is used to display information processed in the electronic device 1 and a visualized user interface, such as the face photo of a candidate, the performance level corresponding to a face photo, and the like.
  • the components 11-13 of the electronic device 1 communicate with one another via a system bus.
  • the recruitment system 10 includes at least one computer-readable instruction stored in the memory 11, executable by the processor 12 to implement the various embodiments of the present application.
  • In step S1, a photo of the candidate's face is obtained.
  • the recruitment system 10 receives the applicant's face photo sent by the user, for example through a mobile phone, tablet computer, or self-service terminal device: the photo may be sent from a client pre-installed in such a terminal, or from a browser running on it.
  • the applicant's face photo may be in various image formats such as JPEG, PNG, and GIF, without limitation herein.
  • the face photo may be taken within a face photographing frame provided in advance by the system: the applicant takes the picture inside the frame and uploads it, so that the face photos received by the system have a uniform specification.
  • alternatively, after receiving the face photo sent by the applicant, the system may perform image processing on it, for example cropping it to a uniform pixel size; such processing facilitates the subsequent recognition of the applicant's face photo.
  • In step S2, the face photo is identified by using a predetermined recognition model, and the performance level corresponding to the face photo is obtained, for the recruiter to refer to when recruiting the candidate; wherein the predetermined recognition model is a deep convolutional neural network model obtained by training, in advance, on a preset number of face sample pictures labeled with different performance levels.
  • the pre-trained recognition model is used to identify the applicant's face photo and produce a recognition result for it.
  • the recognition model can be continuously trained, learned, verified, and optimized on a preset number of manually labeled face sample images of different performance levels, so that it is trained to accurately identify the performance level corresponding to a face.
  • the model may employ a Convolutional Neural Network (CNN), such as AlexNet, CaffeNet, or ResNet.
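As a toy illustration of the kind of convolutional classifier involved, the sketch below runs a single random convolution layer, ReLU, global average pooling, and a five-way softmax head; the shapes, weights, and five-level output are illustrative stand-ins, not the application's actual network:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernels):
    """Valid 2D convolution: (H, W) image with (K, 3, 3) kernels -> (K, H-2, W-2)."""
    h, w = img.shape
    out = np.empty((kernels.shape[0], h - 2, w - 2))
    for k, ker in enumerate(kernels):
        for i in range(h - 2):
            for j in range(w - 2):
                out[k, i, j] = np.sum(img[i:i+3, j:j+3] * ker)
    return out

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def predict(img, kernels, head):
    """Conv -> ReLU -> global average pool -> linear head -> softmax over 5 levels."""
    feat = np.maximum(conv2d(img, kernels), 0.0)   # ReLU
    pooled = feat.mean(axis=(1, 2))                # one value per channel
    return softmax(head @ pooled)                  # probabilities for levels A..E

kernels = rng.standard_normal((4, 3, 3))   # 4 random, untrained 3x3 filters
head = rng.standard_normal((5, 4))         # 5 performance levels x 4 channels
face = rng.random((16, 16))                # fake grayscale face photo
probs = predict(face, kernels, head)       # probability per performance level
```

In practice the named architectures (AlexNet, CaffeNet, ResNet) stack many such convolution blocks and are trained end to end; this sketch only shows the forward-pass shape of the computation.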
  • the labeled performance levels can be, for example, five levels “A, B, C, D, E”, or levels such as “Excellent, Good, Medium, Poor”. Face sample pictures can be extracted from a preset employee performance appraisal database, which contains each employee's performance appraisal level from historical appraisal records of actual work together with the corresponding employee face image. Face images of employees at each performance level are extracted as face sample pictures and labeled with that employee's performance level, and the recognition model is trained on these labeled face sample pictures.
  • the trained recognition model can then be used to identify an applicant's face photo and output the corresponding performance level, such as one of “A, B, C, D, E” or “Excellent, Good, Medium, Poor”.
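The label-extraction step above can be sketched as follows; the record layout and file names are assumptions for illustration, not part of the application:

```python
# Sketch: build a labeled face-sample set from a (hypothetical) appraisal database.
# Each record pairs an employee's face image with their appraisal level.
from collections import defaultdict

# Hypothetical historical appraisal records: (face_image_path, performance_level)
appraisal_records = [
    ("faces/emp01.jpg", "A"),
    ("faces/emp02.jpg", "B"),
    ("faces/emp03.jpg", "A"),
    ("faces/emp04.jpg", "E"),
]

LEVELS = ["A", "B", "C", "D", "E"]  # the five labeled performance levels

def build_training_set(records):
    """Group face sample pictures by their labeled performance level."""
    samples = defaultdict(list)
    for path, level in records:
        if level in LEVELS:
            samples[level].append(path)
    return dict(samples)

training_set = build_training_set(appraisal_records)
```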
  • the training process of the predetermined recognition model is as follows:
  • A. For each preset performance level (such as “A, B, C, D, E” or “Excellent, Good, Medium, Poor”), prepare a corresponding preset number of face sample images, and label each sample picture with the corresponding performance level.
  • the face image of each employee at a given performance level may be extracted from the preset employee performance appraisal database as a face sample picture and labeled with that employee's performance level.
  • before model training, image preprocessing such as cropping and flipping is performed on each face sample image, so that each image is processed into a standard face sample image of uniform specification with the face centered; this can effectively improve the reliability and accuracy of model training.
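A minimal sketch of such preprocessing, assuming images are already loaded as arrays; the 64×64 target size and the flip-as-augmentation choice are illustrative assumptions:

```python
import numpy as np

TARGET = 64  # illustrative uniform size (pixels)

def center_crop(img, size=TARGET):
    """Crop a square of side `size` from the center of a (H, W) grayscale array."""
    h, w = img.shape[:2]
    top = max((h - size) // 2, 0)
    left = max((w - size) // 2, 0)
    return img[top:top + size, left:left + size]

def horizontal_flip(img):
    """Mirror the image left-right (a common augmentation)."""
    return img[:, ::-1]

def preprocess(img):
    """Produce a standard, uniformly sized sample plus a flipped copy."""
    std = center_crop(img)
    return std, horizontal_flip(std)

sample = np.random.rand(100, 80)   # fake 100x80 face photo
std, flipped = preprocess(sample)  # both come out 64x64
```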
  • a validation set is used to verify the accuracy of the trained recognition model. If the accuracy is greater than or equal to a preset accuracy rate (for example, 95%), training ends; otherwise, the number of face sample pictures corresponding to each performance level is increased and the above steps B, C, D, and E are re-executed, until the accuracy of the trained recognition model is greater than or equal to the preset accuracy rate.
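The retrain-until-accurate loop described above can be sketched as follows; `train_model` and `evaluate` are stand-in stubs, since the application does not fix a concrete training procedure, and the accuracy curve is fabricated so the loop terminates:

```python
TARGET_ACCURACY = 0.95  # the preset accuracy rate (for example, 95%)

def train_model(n_samples_per_level):
    """Stub: train a recognition model on n samples per level (stand-in)."""
    return {"n": n_samples_per_level}

def evaluate(model):
    """Stub: validation accuracy; here it simply improves with more samples."""
    return min(0.80 + model["n"] / 1000.0, 1.0)

def train_until_accurate(n_samples=100, step=50):
    """Retrain with more samples per level until accuracy meets the target."""
    model = train_model(n_samples)
    while evaluate(model) < TARGET_ACCURACY:
        n_samples += step                    # increase the per-level sample count
        model = train_model(n_samples)       # re-execute the training steps
    return model, evaluate(model)

model, acc = train_until_accurate()
```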
  • the recruiter can refer to the obtained performance level when recruiting the candidate. For example, different scores can be set according to the performance level: the higher the performance level identified for an applicant by the trained recognition model, the higher the recruitment score; or an extra point can be added to the recruitment score when the identified performance level is “Excellent” or “A”, to assist recruitment.
  • this embodiment identifies an applicant's face photo using a deep convolutional neural network model trained on a preset number of face sample pictures labeled with different performance levels, and predicts the applicant's performance level from the recognition result. Through deep learning, a relationship between the face and the performance level is found, so that at recruitment time a corresponding performance level can be determined from the applicant's face and used as a reference factor for recruitment, providing a new reference factor to assist recruitment and improve its quality.
  • the method further includes:
  • face images of employees at different performance levels can be extracted from the preset employee performance appraisal database.
  • the facial organs (eyebrows, eyes, ears, nose, and/or mouth) in each employee face photo may be extracted, and for each performance level the distribution of the preset classification features of each organ may be counted, so as to determine the performance classification feature of each organ corresponding to each performance level. That is, the characteristics of each facial organ in the face photos of employees at different performance levels are statistically summarized.
  • the reference features of the facial organs include, but are not limited to, the curvature of the upper lip, the distance between the two inner eye corners, the angle formed by the lines joining the tip of the nose to the two mouth corners, the shape of the eyebrows, the type of ear, and the like. Some statistical analysis can be done on these reference features: for example, among the reference features of each organ across the different performance levels, one can look for feature values whose variance is small within the same class and large between different classes. It is also possible to feed a particular category of reference feature (such as the type of nose) into a binary classifier for training; if a high accuracy is obtained, that reference feature has value as a classification reference.
  • take the distance between the two inner eye corners as an example of a preset classification feature. For the employee face photos corresponding to each performance level, the distribution of this distance parameter is counted; for example, among the photos corresponding to performance level A, the most frequent value of the distance (a specific value, or a small range such as 50mm-51mm) is selected as the eye performance classification feature corresponding to level A. The eye performance classification features corresponding to levels B, C, D, and E can be obtained in the same way.
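This most-frequent-bin selection can be sketched as follows; the measurements and the 1mm bin width are fabricated for illustration:

```python
from collections import Counter

# Hypothetical inner-eye-corner distances (mm) measured from level-A photos.
level_a_distances = [50.2, 50.7, 50.4, 51.8, 50.9, 49.6, 50.3]

def dominant_bin(distances, bin_width=1.0):
    """Return the (lo, hi) bin of width `bin_width` containing the most values."""
    bins = Counter(int(d // bin_width) for d in distances)
    b, _ = bins.most_common(1)[0]
    return (b * bin_width, (b + 1) * bin_width)

# The most-populated bin becomes level A's eye performance classification feature.
eye_feature_A = dominant_bin(level_a_distances)
```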
  • a recognition model may be trained separately for each organ; for example, in an optional implementation, the training process of the recognition model corresponding to each organ is as follows:
  • A. For each performance level (such as “A, B, C, D, E” or “Excellent, Good, Medium, Poor”), prepare a corresponding preset number of sample pictures of each facial organ, and label each organ sample picture with the corresponding performance level; each organ sample picture is a picture containing the organ performance classification feature corresponding to that performance level.
  • for example, pictures containing the eye performance classification feature corresponding to each performance level may be prepared: if the eye performance classification feature corresponding to performance level A is “the distance between the two inner eye corners is between 50mm and 51mm”, then a preset number of sample pictures matching that feature can be prepared for performance level A.
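Selecting sample pictures that match a level's classification range can be sketched like this; the records and the 50-51mm range are illustrative assumptions:

```python
# Sketch: keep organ sample pictures whose measured feature falls inside a
# performance level's classification range.
samples = [
    ("eyes01.jpg", 50.4),   # (picture, inner-eye-corner distance in mm)
    ("eyes02.jpg", 52.1),
    ("eyes03.jpg", 50.9),
    ("eyes04.jpg", 49.8),
]

def pick_samples(records, lo=50.0, hi=51.0):
    """Keep pictures whose feature value lies within [lo, hi)."""
    return [pic for pic, dist in records if lo <= dist < hi]

level_a_eye_samples = pick_samples(samples)
```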
  • a validation set is used to verify the accuracy of each trained recognition model. If the accuracy of the recognition models corresponding to all organs is greater than or equal to the preset accuracy rate (for example, 95%), training ends; if the accuracy of the recognition model corresponding to some organ is below the preset accuracy rate, the number of sample images of that organ for each performance level is increased and steps B, C, and D are re-executed. That is, the model training process is complete only once the accuracy of the recognition model for every organ reaches the required level (for example, 95%).
  • the trained recognition models can then be used to identify an applicant's face photo. First, the facial organs are extracted from the applicant's face photo; then the trained recognition model corresponding to each organ is used to identify the performance level of that organ; finally, the performance level corresponding to the applicant's face photo is calculated from the identified per-organ performance levels and the preset organ weights. Specifically, in an optional implementation, the performance level M corresponding to the face photo is calculated as M = a×M1 + b×M2 + c×M3 + d×M4 + e×M5, where:
  • a is the weighting coefficient of the performance level M1 corresponding to the eyebrows;
  • b is the weighting coefficient of the performance level M2 corresponding to the eyes;
  • c is the weighting coefficient of the performance level M3 corresponding to the ears;
  • d is the weighting coefficient of the performance level M4 corresponding to the nose;
  • e is the weighting coefficient of the performance level M5 corresponding to the mouth.
  • the weight coefficients a, b, c, d, e of the different organs may be preset, for example: the eyebrow coefficient a is 0.1, the eye coefficient b is 0.3, the ear coefficient c is 0.3, the nose coefficient d is 0.1, and the mouth coefficient e is 0.2.
  • corresponding indicator values are assigned to the different performance levels; for example, the performance level “excellent” corresponds to an indicator value of 5, “good” to 4, “medium” to 2, and “poor” to 1.
  • for example, suppose the performance level identified for the applicant's eyebrows is “good”, for the eyes “good”, and for the ears “medium”.
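The weighted calculation can be sketched as follows; the nose (“excellent”) and mouth (“good”) levels are assumed for illustration, since the example in the text only gives the eyebrows, eyes, and ears:

```python
# Sketch of the weighted formula M = a*M1 + b*M2 + c*M3 + d*M4 + e*M5,
# using the preset weights and indicator values from the text.
WEIGHTS = {"eyebrows": 0.1, "eyes": 0.3, "ears": 0.3, "nose": 0.1, "mouth": 0.2}
INDICATOR = {"excellent": 5, "good": 4, "medium": 2, "poor": 1}

def performance_score(levels):
    """Weighted sum of per-organ indicator values."""
    return sum(WEIGHTS[organ] * INDICATOR[lvl] for organ, lvl in levels.items())

M = performance_score({
    "eyebrows": "good",   # from the example in the text
    "eyes": "good",
    "ears": "medium",
    "nose": "excellent",  # assumed for illustration
    "mouth": "good",      # assumed for illustration
})
# M = 0.1*4 + 0.3*4 + 0.3*2 + 0.1*5 + 0.2*4 = 3.5
```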
  • FIG. 2 is a schematic flowchart of an embodiment of a recruitment method according to the present application.
  • the recruitment method includes the following steps:
  • In step S10, a photo of the candidate's face is obtained.
  • the recruitment system receives the applicant's face photo sent by the user, for example through a mobile phone, tablet computer, or self-service terminal device: the photo may be sent from a client pre-installed in such a terminal, or from a browser running on it.
  • the applicant's face photo may be in various image formats such as JPEG, PNG, and GIF, without limitation herein.
  • the face photo may be taken within a face photographing frame provided in advance by the system: the applicant takes the picture inside the frame and uploads it, so that the face photos received by the system have a uniform specification.
  • alternatively, after receiving the face photo sent by the applicant, the system may perform image processing on it, for example cropping it to a uniform pixel size, so that the subsequent recognition of the applicant's face photo is more precise.
  • Step S20: Identify the face photo by using a predetermined recognition model, and obtain the performance level corresponding to the face photo, for the recruiter to refer to when recruiting the candidate; wherein the predetermined recognition model is a deep convolutional neural network model obtained by training, in advance, on a preset number of face sample pictures labeled with different performance levels.
  • the pre-trained recognition model is used to identify the applicant's face photo and produce a recognition result for it.
  • the recognition model can be continuously trained, learned, verified, and optimized on a preset number of manually labeled face sample images of different performance levels, so that it is trained to accurately identify the performance level corresponding to a face.
  • the model may employ a Convolutional Neural Network (CNN), such as AlexNet, CaffeNet, or ResNet.
  • the labeled performance levels can be, for example, five levels “A, B, C, D, E”, or levels such as “Excellent, Good, Medium, Poor”. Face sample pictures can be extracted from a preset employee performance appraisal database, which contains each employee's performance appraisal level from historical appraisal records of actual work together with the corresponding employee face image. Face images of employees at each performance level are extracted as face sample pictures and labeled with that employee's performance level, and the recognition model is trained on these labeled face sample pictures.
  • the trained recognition model can then be used to identify an applicant's face photo and output the corresponding performance level, such as one of “A, B, C, D, E” or “Excellent, Good, Medium, Poor”.
  • the training process of the predetermined recognition model is as follows:
  • A. For each preset performance level (such as “A, B, C, D, E” or “Excellent, Good, Medium, Poor”), prepare a corresponding preset number of face sample images, and label each sample picture with the corresponding performance level.
  • the face image of each employee at a given performance level may be extracted from the preset employee performance appraisal database as a face sample picture and labeled with that employee's performance level.
  • before model training, image preprocessing such as cropping and flipping is performed on each face sample image, so that each image is processed into a standard face sample image of uniform specification with the face centered; this can effectively improve the reliability and accuracy of model training.
  • the recruiter can refer to the obtained performance level when recruiting the candidate. For example, different scores can be set according to the performance level: the higher the performance level identified for an applicant by the trained recognition model, the higher the recruitment score; or an extra point can be added to the recruitment score when the identified performance level is “Excellent” or “A”, to assist recruitment.
  • this embodiment identifies an applicant's face photo using a deep convolutional neural network model trained on a preset number of face sample pictures labeled with different performance levels, and predicts the applicant's performance level from the recognition result. Through deep learning, a relationship between the face and the performance level is found, so that at recruitment time a corresponding performance level can be determined from the applicant's face and used as a reference factor for recruitment, providing a new reference factor to assist recruitment and improve its quality.
  • the method further includes:
  • face images of employees at different performance levels can be extracted from the preset employee performance appraisal database.
  • the facial organs (eyebrows, eyes, ears, nose, and/or mouth) in each employee face photo may be extracted, and for each performance level the distribution of the preset classification features of each organ may be counted, so as to determine the performance classification feature of each organ corresponding to each performance level. That is, the characteristics of each facial organ in the face photos of employees at different performance levels are statistically summarized.
  • the reference features of the facial organs include, but are not limited to, the curvature of the upper lip, the distance between the two inner eye corners, the angle formed by the lines joining the tip of the nose to the two mouth corners, the shape of the eyebrows, the type of ear, and the like. Some statistical analysis can be done on these reference features: for example, among the reference features of each organ across the different performance levels, one can look for feature values whose variance is small within the same class and large between different classes. It is also possible to feed a particular category of reference feature (such as the type of nose) into a binary classifier for training; if a high accuracy is obtained, that reference feature has value as a classification reference.
  • take the distance between the two inner eye corners as an example of a preset classification feature. For the employee face photos corresponding to each performance level, the distribution of this distance parameter is counted; for example, among the photos corresponding to performance level A, the most frequent value of the distance (a specific value, or a small range such as 50mm-51mm) is selected as the eye performance classification feature corresponding to level A. The eye performance classification features corresponding to levels B, C, D, and E can be obtained in the same way.
  • the identification models corresponding to the respective organs may be separately trained for each organ, for example, in an optional implementation manner, the training process of the recognition model corresponding to each organ as follows:
  • A. Prepare a corresponding preset number of images of each organ sample in the five senses for each performance level (such as "A, B, C, D, E” or "Excellent, Good, Medium, Poor", etc.) for each organ
  • the sample picture marks a corresponding performance level; wherein each organ sample picture is a picture containing each organ performance classification feature corresponding to each different performance level.
  • For example, for the eye, pictures containing the eye performance classification feature corresponding to each performance level may be prepared. If the eye performance classification feature corresponding to performance level A is "the distance between the two inner corners of the eyes is between 50 mm and 51 mm", then a preset number of sample pictures with this feature can be prepared for performance level A.
  • Use the verification set to verify the accuracy of each trained recognition model. If the accuracy of the recognition models corresponding to all organs is greater than or equal to a preset accuracy rate (for example, 95%), training ends; otherwise, if the accuracy of the recognition model corresponding to some organ is less than the preset accuracy rate, the number of organ sample pictures corresponding to the different performance levels of that organ is increased, and steps B, C and D are re-executed. That is to say, the model training process is complete once the accuracy of the recognition models corresponding to all organs reaches the required threshold (for example, 95%).
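The train-and-verify loop described above can be sketched as follows. This is a schematic only: `train_model`, `evaluate` and `add_samples` are hypothetical callbacks standing in for the per-organ deep-network training, validation, and data-collection steps the text describes:

```python
ORGANS = ["eyebrow", "eye", "ear", "nose", "mouth"]
TARGET_ACCURACY = 0.95  # preset accuracy rate from the text

def train_all_organs(train_sets, val_sets, train_model, evaluate,
                     add_samples, max_rounds=10):
    """Train one recognition model per organ; any organ whose validation
    accuracy falls below the preset threshold gets more sample pictures,
    and its training/verification steps are re-executed."""
    models = {}
    for _ in range(max_rounds):
        for organ in ORGANS:
            models[organ] = train_model(organ, train_sets[organ])
        weak = [o for o in ORGANS
                if evaluate(models[o], val_sets[o]) < TARGET_ACCURACY]
        if not weak:  # all organ models meet the 95% requirement
            return models
        for organ in weak:  # enlarge that organ's sample set and retrain
            train_sets[organ] = add_samples(organ, train_sets[organ])
    return models
```

The `max_rounds` cap is an added safeguard (an assumption, not in the text) so the loop terminates even if some organ never reaches the target accuracy.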
  • After training, the recognition models can be used to identify the face photo of an applicant. First, the facial organs are extracted from the applicant's face photo; then the trained recognition model corresponding to each organ is used to identify the performance level corresponding to each extracted organ. Finally, the performance level corresponding to the applicant's face photo is calculated according to the preset weights of the different organs and the identified performance level of each organ. Specifically, in an optional implementation manner, the formula for calculating the performance level M corresponding to the face photo is M = a×M1 + b×M2 + c×M3 + d×M4 + e×M5, where:
  • a is the weighting coefficient of the performance level M1 corresponding to the eyebrow
  • b is the weighting coefficient of the performance level M2 corresponding to the eye
  • c is the weighting coefficient of the performance level M3 corresponding to the ear
  • d is the weighting coefficient of the performance level M4 corresponding to the nose
  • e is the weighting factor of the performance level M5 corresponding to the mouth.
  • The weight coefficients a, b, c, d and e of the different organs may be preset, for example: the eyebrow weight coefficient a is 0.1, the eye weight coefficient b is 0.3, the ear weight coefficient c is 0.3, the nose weight coefficient d is 0.1, and the mouth weight coefficient e is 0.2.
  • Corresponding indicator values are assigned to the different performance levels. For example, the performance level "excellent" corresponds to an indicator value of 5, the performance level "good" corresponds to an indicator value of 4, and the performance level "middle" corresponds to an indicator value of 2.
  • the performance level “poor” corresponds to an indicator value of 1.
  • Suppose the identified performance level of the eyebrows in the facial features of an applicant is "good", the performance level of the eyes is "good", and the performance level of the ears is "medium".
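Combining the preset weights with the indicator values, the overall performance level M can be computed as sketched below. This is illustrative only: the nose and mouth levels in the example are assumptions (the text leaves them unspecified), and the text's "medium" is treated as the same level as "middle":

```python
# preset weights from the text: eyebrow 0.1, eye 0.3, ear 0.3, nose 0.1, mouth 0.2
WEIGHTS = {"eyebrow": 0.1, "eye": 0.3, "ear": 0.3, "nose": 0.1, "mouth": 0.2}
# indicator values from the text: "excellent" 5, "good" 4, "middle" 2, "poor" 1
INDICATOR = {"excellent": 5, "good": 4, "middle": 2, "poor": 1}

def face_performance_score(levels):
    """M = a*M1 + b*M2 + c*M3 + d*M4 + e*M5 over the five organs,
    where each Mi is the indicator value of that organ's level."""
    return sum(WEIGHTS[organ] * INDICATOR[level]
               for organ, level in levels.items())

# eyebrow/eye/ear levels come from the text; nose and mouth are assumed here
levels = {"eyebrow": "good", "eye": "good", "ear": "middle",
          "nose": "middle", "mouth": "good"}
score = face_performance_score(levels)  # approximately 3.2
```

With these assumed levels, M = 0.1×4 + 0.3×4 + 0.3×2 + 0.1×2 + 0.2×4 = 3.2.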
  • The present application also provides a computer readable storage medium storing a recruitment system, the recruitment system being executable by at least one processor to cause the at least one processor to perform the steps of the recruitment method in the embodiments described above. The specific implementation processes of steps S10, S20, etc. of the recruitment method are as described above and are not repeated here.
  • The method of the foregoing embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and can also be implemented by hardware, but in many cases the former is the better implementation. The technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, magnetic disk, or optical disc) and including a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, etc.) to perform the methods described in the various embodiments of the present application.

Abstract

The invention relates to a recruitment method, an electronic device, and a readable storage medium. The method comprises: obtaining face photos of applicants; and performing recognition on the face photos by means of a predetermined recognition model to obtain the performance levels corresponding to the face photos, so that a recruiter can recruit the applicants with reference to the obtained performance levels, the predetermined recognition model being a deep convolutional neural network model obtained in advance by training with a preset number of sample face images associated with different performance levels. The method assists recruitment by providing a new recruitment reference factor, so that recruitment quality is improved.
PCT/CN2017/108764 2017-09-30 2017-10-31 Recruitment method, electronic device and readable storage medium WO2019061660A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710916039.1 2017-09-30
CN201710916039.1A CN107784482A (zh) 2017-09-30 Recruitment method, electronic device and readable storage medium

Publications (1)

Publication Number Publication Date
WO2019061660A1 true WO2019061660A1 (fr) 2019-04-04

Family

ID=61433681

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/108764 WO2019061660A1 (fr) 2017-09-30 2017-10-31 Recruitment method, electronic device and readable storage medium

Country Status (2)

Country Link
CN (1) CN107784482A (fr)
WO (1) WO2019061660A1 (fr)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108875590A (zh) * 2018-05-25 2018-11-23 平安科技(深圳)有限公司 BMI prediction method and apparatus, computer device and storage medium
CN109308565B (zh) * 2018-08-01 2024-03-19 平安科技(深圳)有限公司 Crowd performance level recognition method and apparatus, storage medium and computer device
CN111680597B (zh) * 2020-05-29 2023-09-01 北京百度网讯科技有限公司 Face recognition model processing method, apparatus, device and storage medium
CN113240390A (zh) * 2021-05-14 2021-08-10 广州红海云计算股份有限公司 Internet-based intelligent human resource management method and apparatus

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080187186A1 (en) * 2007-02-02 2008-08-07 Sony Corporation Image processing apparatus, image processing method and computer program
CN105320945A (zh) * 2015-10-30 2016-02-10 小米科技有限责任公司 Image classification method and apparatus
CN107045618A (zh) * 2016-02-05 2017-08-15 北京陌上花科技有限公司 Facial expression recognition method and apparatus
CN107169455A (zh) * 2017-05-16 2017-09-15 中山大学 Face attribute recognition method based on deep local features

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103065276A (zh) * 2012-05-17 2013-04-24 刘学勇 Feature quantization method
CN104504376A (zh) * 2014-12-22 2015-04-08 厦门美图之家科技有限公司 Age classification method and system for face images

Also Published As

Publication number Publication date
CN107784482A (zh) 2018-03-09

Similar Documents

Publication Publication Date Title
US11259718B1 (en) Systems and methods for automated body mass index calculation to determine value
WO2019109526A1 (fr) Face image age recognition method and device, and storage medium
US10509985B2 (en) Method and apparatus for security inspection
WO2019120115A1 (fr) Face recognition method and apparatus, and computer device
WO2019238063A1 (fr) Text detection and analysis method and apparatus, and device
US10169646B2 (en) Face authentication to mitigate spoofing
US9639769B2 (en) Liveness detection
WO2019174130A1 (fr) Invoice recognition method, server and computer readable storage medium
WO2018028546A1 (fr) Key point positioning method, terminal and computer storage medium
WO2017016240A1 (fr) Banknote serial number identification method
WO2019061660A1 (fr) Recruitment method, electronic device and readable storage medium
WO2019062080A1 (fr) Identity recognition method, electronic device and computer readable storage medium
US10748217B1 (en) Systems and methods for automated body mass index calculation
CN108717663B (zh) Micro-expression-based face-signing fraud judgment method, apparatus, device and medium
WO2019071660A1 (fr) Invoice information identification method, electronic device, and readable storage medium
CN108090830B (zh) Credit risk rating method and apparatus based on facial portraits
WO2019174131A1 (fr) Identity authentication method, server, and computer readable storage medium
WO2019071738A1 (fr) Examinee identity authentication method and apparatus, readable storage medium and terminal device
CN107679475B (zh) Store monitoring and evaluation method, apparatus and storage medium
WO2012132418A1 (fr) Characteristic estimation device
WO2018072028A1 (fr) Face authentication to mitigate spoofing
CN111785384A (zh) Artificial-intelligence-based abnormal data identification method and related device
WO2021139316A1 (fr) Expression recognition model establishment method and apparatus, computer device and storage medium
CN100371945C (zh) Computer-aided method for authenticating calligraphy works
CN113313114B (zh) Certificate information acquisition method, apparatus, device and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17926739

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 23/09/2020)

122 Ep: pct application non-entry in european phase

Ref document number: 17926739

Country of ref document: EP

Kind code of ref document: A1