WO2019061660A1 - Recruitment method, electronic device, and readable storage medium - Google Patents

Recruitment method, electronic device, and readable storage medium

Info

Publication number
WO2019061660A1
Authority
WO
WIPO (PCT)
Prior art keywords
organ
performance level
training
face
performance
Prior art date
Application number
PCT/CN2017/108764
Other languages
French (fr)
Chinese (zh)
Inventor
王健宗
王晨羽
马进
肖京
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by 平安科技(深圳)有限公司
Publication of WO2019061660A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/10Office automation; Time management
    • G06Q10/105Human resources
    • G06Q10/1053Employment or hiring
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/165Detection; Localisation; Normalisation using facial parts and geometric relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships

Definitions

  • the present application relates to the field of computer technologies, and in particular, to a recruitment method, an electronic device, and a readable storage medium.
  • the purpose of the present application is to provide a recruitment method, an electronic device, and a readable storage medium, which are intended to assist recruitment by identifying the face of a candidate and predicting corresponding performance.
  • a first aspect of the present application provides an electronic device. The electronic device includes a memory and a processor; the memory stores a recruitment system executable on the processor, and the recruitment system, when executed by the processor, implements the following steps:
  • obtaining a face photo of a candidate;
  • identifying the face photo using a predetermined recognition model to obtain the performance level corresponding to the face photo, so that a recruiter can refer to the obtained performance level when recruiting the candidate; the predetermined recognition model is a deep convolutional neural network model obtained in advance by training a preset number of face sample pictures labeled with different performance levels.
  • a second aspect of the present application provides a recruitment method, the recruitment method comprising:
  • Step 1: obtain a face photo of the candidate;
  • Step 2: identify the face photo using a predetermined recognition model to obtain the performance level corresponding to the face photo, so that a recruiter can refer to the obtained performance level when recruiting the candidate; the predetermined recognition model is a deep convolutional neural network model obtained in advance by training a preset number of face sample pictures labeled with different performance levels.
  • a third aspect of the present application provides a computer readable storage medium storing a recruitment system, the recruitment system being executable by at least one processor to cause the at least one processor to perform the following steps: obtaining a face photo of a candidate; and identifying the face photo using a predetermined recognition model to obtain the corresponding performance level, where the recognition model is a deep convolutional neural network model obtained in advance by training a preset number of face sample pictures labeled with different performance levels.
  • the recruitment method, system and readable storage medium proposed by the present application identify a candidate's face photo using a deep convolutional neural network model trained on a preset number of face sample pictures labeled with different performance levels, and predict the candidate's performance level from the recognition result. Deep learning is used to find the relationship between faces and performance levels, so that a corresponding performance level can be determined from a candidate's face at recruitment time and used as one reference factor in the hiring decision, providing a new reference factor to assist recruitment and improve recruitment quality.
  • FIG. 1 is a schematic diagram of an operating environment of a preferred embodiment of the recruitment system 10 of the present application.
  • FIG. 2 is a schematic flowchart of an embodiment of a recruitment method of the present application.
  • the terms “first”, “second” and the like in the present application are used for description only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features referred to; a feature defined with “first” or “second” may explicitly or implicitly include at least one such feature.
  • the technical solutions of the various embodiments may be combined with each other, provided such a combination can be implemented by a person of ordinary skill in the art; when a combination of technical solutions is contradictory or cannot be implemented, the combination is considered not to exist and falls outside the scope of protection claimed in this application.
  • FIG. 1 is a schematic diagram of an operating environment of a preferred embodiment of the recruitment system 10 of the present application.
  • the recruitment system 10 is installed and operated in the electronic device 1.
  • the electronic device 1 may include, but is not limited to, a memory 11, a processor 12, and a display 13.
  • Figure 1 shows only the electronic device 1 with components 11-13, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead.
  • the memory 11 includes at least one type of readable storage medium. In some embodiments the memory 11 may be an internal storage unit of the electronic device 1, such as a hard disk or internal memory of the electronic device 1. In other embodiments the memory 11 may also be an external storage device of the electronic device 1, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a flash card equipped on the electronic device 1. Further, the memory 11 may include both an internal storage unit and an external storage device of the electronic device 1.
  • the memory 11 is configured to store application software installed on the electronic device 1 and various types of data, such as program codes of the recruiting system 10 and the like.
  • the memory 11 can also be used to temporarily store data that has been output or is about to be output.
  • the processor 12 may in some embodiments be a central processing unit (CPU), a microprocessor or another data processing chip, and is used to run the program code or process the data stored in the memory 11, for example to execute the recruitment system 10.
  • the display 13 may in some embodiments be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch display, or the like.
  • the display 13 is used to display the information processed in the electronic device 1 and a visual user interface, such as a candidate's face photo and the performance level corresponding to the face photo.
  • the components 11-13 of the electronic device 1 communicate with one another via a system bus.
  • the recruitment system 10 includes at least one computer readable instruction stored in the memory 11; the at least one computer readable instruction is executable by the processor 12 to implement the embodiments of the present application.
  • In step S1, a face photo of the candidate is obtained.
  • In this embodiment, the recruitment system 10 receives a face photo of the candidate sent by a user, for example a photo sent from a terminal such as a mobile phone, a tablet computer or a self-service terminal device, either through a client pre-installed on the terminal or through a browser running on the terminal.
  • the candidate's face photo may be in any of various image formats such as JPEG, PNG or GIF, which is not limited here.
  • optionally, the system may first provide a photo-capture frame of uniform specification; the candidate takes the photo inside this frame and uploads it, so that all face photos received by the system share the same specification.
  • alternatively, after the face photo sent by the candidate is received, image processing may be applied to it, for example cropping the received photo to a uniform pixel size, which makes the subsequent recognition of the candidate's face photo more accurate.
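As a concrete illustration of the cropping-to-uniform-size step just mentioned, here is a minimal sketch using Pillow; the 224×224 target size and the center-crop strategy are illustrative assumptions, not requirements of the described method.

```python
# Minimal sketch: normalize an uploaded face photo to a uniform pixel size.
# The 224x224 target and center-crop strategy are assumptions for illustration.
from PIL import Image

def normalize_face_photo(path: str, out_path: str, size: int = 224) -> None:
    img = Image.open(path).convert("RGB")
    # Center-crop to a square, then resize to the uniform specification.
    w, h = img.size
    side = min(w, h)
    left, top = (w - side) // 2, (h - side) // 2
    img = img.crop((left, top, left + side, top + side)).resize((size, size))
    img.save(out_path, format="JPEG")

# Example: normalize_face_photo("candidate.png", "candidate_224.jpg")
```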
  • In step S2, the face photo is identified using a predetermined recognition model to obtain the performance level corresponding to the face photo, so that a recruiter can refer to the obtained performance level when recruiting the candidate; the predetermined recognition model is a deep convolutional neural network model obtained in advance by training a preset number of face sample pictures labeled with different performance levels.
  • In this embodiment, after the candidate's face photo is obtained, the pre-trained recognition model is used to identify it and produce a recognition result for the candidate's face photo.
  • the recognition model can be trained, validated and optimized in advance on a large preset number of face sample pictures labeled with different performance levels, until it can accurately recognize the label corresponding to each performance level. For example, the recognition model may be a deep convolutional neural network (CNN) model such as AlexNet, CaffeNet or ResNet.
  • for example, the labeled performance levels may be the five levels “A, B, C, D, E” or the four levels “excellent, good, medium, poor”. Face sample pictures can be extracted from a preset employee performance appraisal database, which contains the historical appraisal level of each employee in actual work together with the corresponding employee face picture. Employee face pictures of employees at different performance levels are extracted as face sample pictures and labeled with the corresponding performance level, and the recognition model is trained on these labeled face sample pictures.
  • after a candidate's face photo is obtained, the trained recognition model can be used to identify it and determine the corresponding performance level, i.e. one of “A, B, C, D, E” or one of “excellent, good, medium, poor”.
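To make the sample-preparation step concrete, the following is a hedged sketch of turning records from an employee performance appraisal database into a labeled sample set with one folder per performance level; the record layout (photo path plus appraisal level) is an assumption made for illustration.

```python
# Hedged sketch: build a labelled sample set, one folder per performance level,
# from (employee_photo_path, performance_level) records. The record layout is
# an illustrative assumption, not a fixed interface of the described system.
import shutil
from pathlib import Path

def build_sample_set(records, out_dir: str) -> None:
    """records: iterable of (photo_path, level) pairs, e.g. ("photos/e001.jpg", "A")."""
    for photo_path, level in records:
        target = Path(out_dir) / level          # e.g. samples/A/, samples/B/, ...
        target.mkdir(parents=True, exist_ok=True)
        shutil.copy(photo_path, target / Path(photo_path).name)
```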
  • In an optional implementation, the training process of the predetermined recognition model is as follows:
  • A. For each preset performance level (such as “A, B, C, D, E” or “excellent, good, medium, poor”), prepare a corresponding preset number of face sample pictures and label each sample picture with its performance level. For example, employee face pictures of employees at different performance levels can be extracted from the preset employee performance appraisal database as face sample pictures and labeled with the corresponding performance level.
  • B. Preprocess each face sample picture, for example by cropping and flipping, to obtain the training pictures used for model training; processing every sample into a standard face picture of uniform specification with the face centered effectively improves the fidelity and accuracy of training.
  • C. Divide all training pictures into a training set at a first proportion (for example, 75%) and a verification set at a second proportion (for example, 25%).
  • D. Train the predetermined recognition model with the training set.
  • E. Use the verification set to verify the accuracy of the trained recognition model. If the accuracy is greater than or equal to a preset accuracy (for example, 95%), training ends; otherwise, increase the number of face sample pictures for each performance level and re-execute steps B, C, D and E until the accuracy of the trained recognition model reaches the preset accuracy.
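The following is a hedged sketch of the training loop in steps A-E using PyTorch and torchvision; the folder layout (one sub-directory per performance level, matching the builder sketched earlier), the ResNet-18 backbone and the hyper-parameters are illustrative assumptions rather than requirements of the embodiment.

```python
# Hedged sketch of training steps A-E with PyTorch/torchvision. Backbone,
# batch size, learning rate and epoch count are illustrative assumptions.
import torch
from torch import nn
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, models, transforms

def train_recognition_model(sample_dir: str, num_levels: int = 5,
                            target_acc: float = 0.95, epochs: int = 10):
    tfm = transforms.Compose([
        transforms.Resize((224, 224)),          # step B: uniform specification
        transforms.RandomHorizontalFlip(),      # step B: flip augmentation
        transforms.ToTensor(),
    ])
    data = datasets.ImageFolder(sample_dir, transform=tfm)   # step A: labelled samples
    n_train = int(0.75 * len(data))                          # step C: 75% / 25% split
    train_set, val_set = random_split(data, [n_train, len(data) - n_train])
    train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
    val_loader = DataLoader(val_set, batch_size=32)

    model = models.resnet18(num_classes=num_levels)          # deep CNN classifier
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(epochs):                                  # step D: train
        model.train()
        for x, y in train_loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

    model.eval()                                             # step E: verify
    correct = total = 0
    with torch.no_grad():
        for x, y in val_loader:
            correct += (model(x).argmax(dim=1) == y).sum().item()
            total += y.numel()
    accuracy = correct / total
    # If accuracy < target_acc, the embodiment adds more face samples per level
    # and repeats steps B-E; that outer loop is omitted here for brevity.
    return model, accuracy
```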
  • After the trained recognition model has identified the performance level corresponding to the candidate's face photo, the recruiter can refer to that level when recruiting the candidate. For example, different scores may be set for different performance levels, so that the higher the candidate's identified performance level, the higher the recruitment score; or candidates whose identified level is “excellent” or “A” may receive extra recruitment points, thereby assisting recruitment.
  • Compared with the prior art, this embodiment identifies a candidate's face photo with a deep convolutional neural network model trained on a preset number of face sample pictures labeled with different performance levels, and predicts the candidate's performance level from the recognition result. Deep learning is used to find the relationship between faces and performance levels, so that a corresponding performance level can be determined from a candidate's face at recruitment time and used as one reference factor in the hiring decision, providing a new reference factor to assist recruitment and improve recruitment quality.
  • In an optional embodiment, when the recruitment system 10 is executed by the processor 12 to implement step S2, the method further includes:
  • finding the employee face photos corresponding to each performance level from a preset database, for example extracting the employee face pictures of employees at different performance levels from the preset employee performance appraisal database.
  • Further, the facial organs such as the eyebrows, eyes, ears, nose and/or mouth may be extracted from each employee face photo, the distribution of preset classification features of each organ may be counted across the photos corresponding to each performance level, and the performance classification feature of each organ for each performance level may be determined from the statistics. In other words, the characteristics of each facial organ in the face photos of employees at different performance levels are statistically summarized.
  • Reference features that may be considered for each organ include, but are not limited to, the curvature of the upper lip, the distance between the two inner eye corners, the angle formed by the lines joining the nose tip to the two mouth corners, the shape of the eyebrows, the type of ear, and so on. Statistical analysis can be performed on these reference features, for example to find which feature values have a small variance within the same performance class and a large variance between different classes. Certain categorical reference features (such as the type of nose) can also be fed into a binary classifier for training; if a high accuracy is obtained, that reference feature is valuable as a classification reference.
  • Taking the eyes among the facial organs as an example, with the preset classification feature being the distance between the two inner eye corners: for the employee face photos corresponding to each performance level, the distribution of this inner-corner distance is counted. For instance, for the photos corresponding to performance level A, the distance value (either a specific value or a small range such as 50 mm-51 mm) that occurs most frequently is selected as the eye performance classification feature corresponding to level A; the eye performance classification features corresponding to levels B, C, D and E can be obtained in the same way.
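A minimal sketch of this counting step follows; the input format of (performance level, inner-corner distance) pairs and the 1 mm bin width are assumptions made for illustration.

```python
# Sketch: for each performance level, find the most common 1 mm bin of the
# inner-eye-corner distance, as in the statistics described above.
from collections import Counter, defaultdict

def eye_classification_features(records):
    """records: iterable of (performance_level, inner_corner_distance_mm) pairs."""
    bins = defaultdict(Counter)
    for level, dist_mm in records:
        low = int(dist_mm)                       # e.g. 50.4 mm falls in the 50-51 mm bin
        bins[level][(low, low + 1)] += 1
    # The most frequent bin per level becomes that level's eye classification feature.
    return {level: counter.most_common(1)[0][0] for level, counter in bins.items()}

# Example: eye_classification_features([("A", 50.4), ("A", 50.9), ("B", 52.1)])
# -> {"A": (50, 51), "B": (52, 53)}
```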
  • A recognition model may also be trained separately for each organ. For example, in an optional implementation the training process of the recognition model corresponding to each organ is as follows:
  • A. For each performance level (such as “A, B, C, D, E” or “excellent, good, medium, poor”), prepare a corresponding preset number of sample pictures of each facial organ and label each organ sample picture with its performance level, where each organ sample picture contains the organ performance classification feature corresponding to that performance level.
  • For the eyes, for example, pictures exhibiting the eye performance classification feature of each level may be prepared: if the eye classification feature corresponding to level A is “the distance between the two inner eye corners is between 50 mm and 51 mm”, then a preset number of sample pictures with that feature are prepared for level A.
  • The organ sample pictures are then preprocessed, divided into a training set and a verification set, and used to train the recognition model for each organ (steps B and C, analogous to the main training process above). D. The verification set is used to verify the accuracy of each trained recognition model: if the accuracy of the recognition models corresponding to all organs is greater than or equal to the preset accuracy (for example, 95%), training ends; if the accuracy of the recognition model for some organ is below the preset accuracy, the number of organ sample pictures for that organ at the different performance levels is increased and steps B, C and D are re-executed. In other words, model training is complete only when the recognition models corresponding to all organs reach the required accuracy (for example, 95%).
  • After a candidate's face photo is obtained, the trained recognition models can be used to identify it: first the facial organs are extracted from the candidate's face photo, then the trained recognition model corresponding to each organ is used to identify the performance level of that organ, and finally the performance level corresponding to the whole face photo is calculated from the identified per-organ performance levels and preset organ weights. Specifically, in an optional implementation, the performance level M corresponding to the face photo is calculated as M = a×M1 + b×M2 + c×M3 + d×M4 + e×M5, where:
  • a is the weighting coefficient of the performance level M1 corresponding to the eyebrow
  • b is the weighting coefficient of the performance level M2 corresponding to the eye
  • c is the weighting coefficient of the performance level M3 corresponding to the ear
  • d is the weighting coefficient of the performance level M4 corresponding to the nose
  • e is the weighting factor of the performance level M5 corresponding to the mouth.
  • the weight coefficients a, b, c, d, e of the different organs may be preset, for example the eyebrow weight a is 0.1, the eye weight b is 0.3, the ear weight c is 0.3, the nose weight d is 0.1 and the mouth weight e is 0.2.
  • Corresponding indicator values are assigned to the different performance levels, for example the level “excellent” corresponds to an indicator value of 5, “good” to 4, “medium” to 2 and “poor” to 1.
  • For instance, if the identified performance level of the candidate's eyebrows is “good”, of the eyes is “good” and of the ears is “medium”, and so on for the remaining organs, the overall performance level M is obtained by weighting the corresponding indicator values with the organ weight coefficients, as sketched below.
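The weighted combination can be sketched as follows, using the example weights and indicator values given above; mapping the weighted score back to a level label, and the nose and mouth levels in the usage example, are illustrative assumptions.

```python
# Sketch of the weighted combination M = a*M1 + b*M2 + c*M3 + d*M4 + e*M5
# using the example weights and indicator values described above.
WEIGHTS = {"eyebrow": 0.1, "eye": 0.3, "ear": 0.3, "nose": 0.1, "mouth": 0.2}
INDICATOR = {"excellent": 5, "good": 4, "medium": 2, "poor": 1}

def overall_performance(organ_levels: dict) -> float:
    """organ_levels: e.g. {"eyebrow": "good", "eye": "good", "ear": "medium", ...}"""
    return sum(WEIGHTS[organ] * INDICATOR[level] for organ, level in organ_levels.items())

# Hypothetical example (the nose and mouth levels are assumed for illustration):
# overall_performance({"eyebrow": "good", "eye": "good", "ear": "medium",
#                      "nose": "good", "mouth": "excellent"})
# -> 0.1*4 + 0.3*4 + 0.3*2 + 0.1*4 + 0.2*5 = 3.6
```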
  • FIG. 2 is a schematic flowchart of an embodiment of a recruitment method according to the present application.
  • the recruitment method includes the following steps:
  • In step S10, a face photo of the candidate is obtained.
  • In this embodiment, the recruitment system receives a face photo of the candidate sent by a user, for example a photo sent from a terminal such as a mobile phone, a tablet computer or a self-service terminal device, either through a client pre-installed on the terminal or through a browser running on the terminal.
  • the candidate's face photo may be in any of various image formats such as JPEG, PNG or GIF, which is not limited here.
  • optionally, the system may first provide a photo-capture frame of uniform specification; the candidate takes the photo inside this frame and uploads it, so that all face photos received by the system share the same specification.
  • alternatively, after the face photo sent by the candidate is received, image processing may be applied to it, for example cropping the received photo to a uniform pixel size, which makes the subsequent recognition of the candidate's face photo more accurate.
  • In step S20, the face photo is identified using a predetermined recognition model to obtain the performance level corresponding to the face photo, so that a recruiter can refer to the obtained performance level when recruiting the candidate; the predetermined recognition model is a deep convolutional neural network model obtained in advance by training a preset number of face sample pictures labeled with different performance levels.
  • In this embodiment, after the candidate's face photo is obtained, the pre-trained recognition model is used to identify it and produce a recognition result for the candidate's face photo.
  • the recognition model can be trained, validated and optimized in advance on a large preset number of face sample pictures labeled with different performance levels, until it can accurately recognize the label corresponding to each performance level. For example, the recognition model may be a deep convolutional neural network (CNN) model such as AlexNet, CaffeNet or ResNet.
  • for example, the labeled performance levels may be the five levels “A, B, C, D, E” or the four levels “excellent, good, medium, poor”. Face sample pictures can be extracted from a preset employee performance appraisal database, which contains the historical appraisal level of each employee in actual work together with the corresponding employee face picture. Employee face pictures of employees at different performance levels are extracted as face sample pictures and labeled with the corresponding performance level, and the recognition model is trained on these labeled face sample pictures.
  • after a candidate's face photo is obtained, the trained recognition model can be used to identify it and determine the corresponding performance level, i.e. one of “A, B, C, D, E” or one of “excellent, good, medium, poor”.
  • In an optional implementation, the training process of the predetermined recognition model is as follows:
  • A. For each preset performance level (such as “A, B, C, D, E” or “excellent, good, medium, poor”), prepare a corresponding preset number of face sample pictures and label each sample picture with its performance level. For example, employee face pictures of employees at different performance levels can be extracted from the preset employee performance appraisal database as face sample pictures and labeled with the corresponding performance level.
  • B. Preprocess each face sample picture, for example by cropping and flipping (as sketched below), to obtain the training pictures used for model training; processing every sample into a standard face picture of uniform specification with the face centered effectively improves the fidelity and accuracy of training. The remaining steps (splitting into training and verification sets, training, and verification against a preset accuracy) are as described for the first embodiment above.
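One possible form of this face-centered preprocessing is sketched below using OpenCV's bundled Haar cascade face detector; the detector choice, the 224×224 output size and the horizontal-flip augmentation are assumptions made for illustration.

```python
# Hedged sketch of face-centred preprocessing: detect the face, crop around it
# and resize to a uniform specification; a flipped copy is added as augmentation.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def standardize_sample(path: str, size: int = 224):
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                       # sample skipped if no face is found
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # largest detected face
    crop = cv2.resize(img[y:y + h, x:x + w], (size, size))
    flipped = cv2.flip(crop, 1)           # horizontally flipped copy
    return crop, flipped
```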
  • After the trained recognition model has identified the performance level corresponding to the candidate's face photo, the recruiter can refer to that level when recruiting the candidate. For example, different scores may be set for different performance levels, so that the higher the candidate's identified performance level, the higher the recruitment score; or candidates whose identified level is “excellent” or “A” may receive extra recruitment points, thereby assisting recruitment.
  • Compared with the prior art, this embodiment identifies a candidate's face photo with a deep convolutional neural network model trained on a preset number of face sample pictures labeled with different performance levels, and predicts the candidate's performance level from the recognition result. Deep learning is used to find the relationship between faces and performance levels, so that a corresponding performance level can be determined from a candidate's face at recruitment time and used as one reference factor in the hiring decision, providing a new reference factor to assist recruitment and improve recruitment quality.
  • In an optional embodiment, implementing step S20 further includes:
  • finding the employee face photos corresponding to each performance level from a preset database, for example extracting the employee face pictures of employees at different performance levels from the preset employee performance appraisal database.
  • Further, the facial organs such as the eyebrows, eyes, ears, nose and/or mouth may be extracted from each employee face photo, the distribution of preset classification features of each organ may be counted across the photos corresponding to each performance level, and the performance classification feature of each organ for each performance level may be determined from the statistics. In other words, the characteristics of each facial organ in the face photos of employees at different performance levels are statistically summarized.
  • Reference features that may be considered for each organ include, but are not limited to, the curvature of the upper lip, the distance between the two inner eye corners, the angle formed by the lines joining the nose tip to the two mouth corners, the shape of the eyebrows, the type of ear, and so on. Statistical analysis can be performed on these reference features, for example to find which feature values have a small variance within the same performance class and a large variance between different classes. Certain categorical reference features (such as the type of nose) can also be fed into a binary classifier for training; if a high accuracy is obtained, that reference feature is valuable as a classification reference.
  • Taking the eyes among the facial organs as an example, with the preset classification feature being the distance between the two inner eye corners: for the employee face photos corresponding to each performance level, the distribution of this inner-corner distance is counted. For instance, for the photos corresponding to performance level A, the distance value (either a specific value or a small range such as 50 mm-51 mm) that occurs most frequently is selected as the eye performance classification feature corresponding to level A; the eye performance classification features corresponding to levels B, C, D and E can be obtained in the same way.
  • A recognition model may also be trained separately for each organ. For example, in an optional implementation the training process of the recognition model corresponding to each organ is as follows:
  • A. For each performance level (such as “A, B, C, D, E” or “excellent, good, medium, poor”), prepare a corresponding preset number of sample pictures of each facial organ and label each organ sample picture with its performance level, where each organ sample picture contains the organ performance classification feature corresponding to that performance level.
  • For the eyes, for example, pictures exhibiting the eye performance classification feature of each level may be prepared: if the eye classification feature corresponding to level A is “the distance between the two inner eye corners is between 50 mm and 51 mm”, then a preset number of sample pictures with that feature are prepared for level A.
  • The organ sample pictures are then preprocessed, divided into a training set and a verification set, and used to train the recognition model for each organ (steps B and C, analogous to the main training process above). D. The verification set is used to verify the accuracy of each trained recognition model: if the accuracy of the recognition models corresponding to all organs is greater than or equal to the preset accuracy (for example, 95%), training ends; if the accuracy of the recognition model for some organ is below the preset accuracy, the number of organ sample pictures for that organ at the different performance levels is increased and steps B, C and D are re-executed. In other words, model training is complete only when the recognition models corresponding to all organs reach the required accuracy (for example, 95%), as sketched below.
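The per-organ verification rule can be sketched as the following loop; `train_fn` and `enlarge_fn` are hypothetical, caller-supplied stand-ins for the training and sample-collection steps and are not part of the source.

```python
# Sketch of the per-organ verification rule: training only finishes when every
# organ's recognition model reaches the preset accuracy (for example, 95%).
ORGANS = ["eyebrow", "eye", "ear", "nose", "mouth"]

def train_all_organ_models(samples, train_fn, enlarge_fn,
                           target_acc: float = 0.95, max_rounds: int = 10):
    """samples: {organ: sample_data}; train_fn(organ, data) -> (model, accuracy);
    enlarge_fn(organ, data) -> enlarged data. Both callables are assumed helpers."""
    models = {}
    for _ in range(max_rounds):
        below_target = []
        for organ in ORGANS:
            model, acc = train_fn(organ, samples[organ])
            models[organ] = model
            if acc < target_acc:
                below_target.append(organ)
        if not below_target:
            return models                      # every organ meets the threshold
        for organ in below_target:             # add samples only for weak organs
            samples[organ] = enlarge_fn(organ, samples[organ])
    raise RuntimeError("accuracy target not reached within max_rounds")
```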
  • After a candidate's face photo is obtained, the trained recognition models can be used to identify it: first the facial organs are extracted from the candidate's face photo (a sketch of this extraction step follows after the example below), then the trained recognition model corresponding to each organ is used to identify the performance level of that organ, and finally the performance level corresponding to the whole face photo is calculated from the identified per-organ performance levels and preset organ weights. Specifically, in an optional implementation, the performance level M corresponding to the face photo is calculated as M = a×M1 + b×M2 + c×M3 + d×M4 + e×M5, where:
  • a is the weighting coefficient of the performance level M1 corresponding to the eyebrow
  • b is the weighting coefficient of the performance level M2 corresponding to the eye
  • c is the weighting coefficient of the performance level M3 corresponding to the ear
  • d is the weighting coefficient of the performance level M4 corresponding to the nose
  • e is the weighting factor of the performance level M5 corresponding to the mouth.
  • the weight coefficients a, b, c, d, e of the different organs may be preset, for example the eyebrow weight a is 0.1, the eye weight b is 0.3, the ear weight c is 0.3, the nose weight d is 0.1 and the mouth weight e is 0.2.
  • Corresponding indicator values are assigned to the different performance levels, for example the level “excellent” corresponds to an indicator value of 5, “good” to 4, “medium” to 2 and “poor” to 1.
  • For instance, if the identified performance level of the candidate's eyebrows is “good”, of the eyes is “good” and of the ears is “medium”, and so on for the remaining organs, the overall performance level M is obtained by weighting the corresponding indicator values with the organ weight coefficients.
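The per-organ pipeline above first extracts the facial organs from the candidate's photo. A hedged sketch of that extraction step using dlib's 68-point landmark model is shown below; the model choice and file name are assumptions, and the standard 68-point layout has no ear landmarks, so the ear region would need a separate detector in practice.

```python
# Hedged sketch: locate the facial organs in a candidate photo with dlib's
# 68-point landmark predictor (the .dat model file must be obtained separately).
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

ORGAN_LANDMARKS = {            # index ranges of the standard 68-point layout
    "eyebrow": range(17, 27),
    "eye": range(36, 48),
    "nose": range(27, 36),
    "mouth": range(48, 68),
}

def extract_organ_points(image):
    """image: an RGB numpy array. Returns {organ: [(x, y), ...]} for the first face."""
    faces = detector(image, 1)
    if not faces:
        return None
    shape = predictor(image, faces[0])
    return {organ: [(shape.part(i).x, shape.part(i).y) for i in idx]
            for organ, idx in ORGAN_LANDMARKS.items()}
```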
  • the present application also provides a computer readable storage medium storing a recruitment system, the recruitment system being executable by at least one processor to cause the at least one processor to perform the steps of the recruitment method in the embodiments described above; the specific implementation of steps S10, S20 and so on is as described above and is not repeated here.
  • the method of the foregoing embodiments can be implemented by means of software plus a necessary general hardware platform, or by hardware, but in many cases the former is the better implementation.
  • the technical solution of the present application, or the part that contributes to the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk or an optical disc) that includes a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the various embodiments of the present application.

Abstract

A recruitment method, an electronic device, and a readable storage medium. The method comprises: obtaining a face photo of a candidate; and identifying the face photo using a predetermined recognition model to obtain the performance level corresponding to the face photo, so that a recruiter can recruit the candidate with reference to the obtained performance level, the predetermined recognition model being a deep convolutional neural network model obtained in advance by training a preset number of face sample pictures labeled with different performance levels. The method assists recruitment by providing a new recruitment reference factor, thereby improving the quality of recruitment.

Description

Recruitment method, electronic device and readable storage medium
Under the Paris Convention, the present application claims priority to Chinese Patent Application No. CN 201710916039.1, filed on September 30, 2017 and entitled “Recruitment method, electronic device and readable storage medium”, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of computer technologies, and in particular, to a recruitment method, an electronic device, and a readable storage medium.
Background
Most existing recruitment approaches rely on traditional written tests, interviews and the like; there is no technical solution that uses a candidate's facial appearance to discover potential talent and assist recruitment.
Summary of the Invention
The purpose of the present application is to provide a recruitment method, an electronic device and a readable storage medium, which are intended to assist recruitment by identifying a candidate's face and predicting the corresponding performance.
To achieve the above purpose, a first aspect of the present application provides an electronic device. The electronic device includes a memory and a processor; the memory stores a recruitment system executable on the processor, and the recruitment system, when executed by the processor, implements the following steps:
A1. Obtain a face photo of the candidate;
B1. Identify the face photo using a predetermined recognition model to obtain the performance level corresponding to the face photo, so that a recruiter can refer to the obtained performance level when recruiting the candidate; the predetermined recognition model is a deep convolutional neural network model obtained in advance by training a preset number of face sample pictures labeled with different performance levels.
A second aspect of the present application provides a recruitment method, the recruitment method comprising:
Step 1: obtain a face photo of the candidate;
Step 2: identify the face photo using a predetermined recognition model to obtain the performance level corresponding to the face photo, so that a recruiter can refer to the obtained performance level when recruiting the candidate; the predetermined recognition model is a deep convolutional neural network model obtained in advance by training a preset number of face sample pictures labeled with different performance levels.
A third aspect of the present application provides a computer readable storage medium storing a recruitment system, the recruitment system being executable by at least one processor to cause the at least one processor to perform the following steps:
A2. Obtain a face photo of the candidate;
B2. Identify the face photo using a predetermined recognition model to obtain the performance level corresponding to the face photo, so that a recruiter can refer to the obtained performance level when recruiting the candidate; the predetermined recognition model is a deep convolutional neural network model obtained in advance by training a preset number of face sample pictures labeled with different performance levels.
The recruitment method, system and readable storage medium proposed by the present application identify a candidate's face photo using a deep convolutional neural network model trained on a preset number of face sample pictures labeled with different performance levels, and predict the candidate's performance level from the recognition result. Deep learning is used to find the relationship between faces and performance levels, so that a corresponding performance level can be determined from a candidate's face at recruitment time and used as one reference factor in the hiring decision, providing a new reference factor to assist recruitment and improve recruitment quality.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the operating environment of a preferred embodiment of the recruitment system 10 of the present application;
FIG. 2 is a schematic flowchart of an embodiment of the recruitment method of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application clearer, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the application and are not intended to limit it. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the scope of protection of the present application.
It should be noted that the terms “first”, “second” and the like in the present application are used for description only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features referred to; a feature defined with “first” or “second” may explicitly or implicitly include at least one such feature. The technical solutions of the various embodiments may be combined with each other, provided such a combination can be implemented by a person of ordinary skill in the art; when a combination of technical solutions is contradictory or cannot be implemented, the combination is considered not to exist and falls outside the scope of protection claimed in this application.
The present application provides a recruitment system. Referring to FIG. 1, FIG. 1 is a schematic diagram of the operating environment of a preferred embodiment of the recruitment system 10 of the present application.
In this embodiment, the recruitment system 10 is installed and runs in an electronic device 1. The electronic device 1 may include, but is not limited to, a memory 11, a processor 12 and a display 13. FIG. 1 shows only the electronic device 1 with the components 11-13, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead.
The memory 11 includes at least one type of readable storage medium. In some embodiments the memory 11 may be an internal storage unit of the electronic device 1, such as a hard disk or internal memory of the electronic device 1. In other embodiments the memory 11 may also be an external storage device of the electronic device 1, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a flash card equipped on the electronic device 1. Further, the memory 11 may include both an internal storage unit and an external storage device of the electronic device 1. The memory 11 is used to store the application software installed on the electronic device 1 and various types of data, such as the program code of the recruitment system 10, and may also be used to temporarily store data that has been output or is to be output.
The processor 12 may in some embodiments be a central processing unit (CPU), a microprocessor or another data processing chip, and is used to run the program code or process the data stored in the memory 11, for example to execute the recruitment system 10.
The display 13 may in some embodiments be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch display, or the like. The display 13 is used to display the information processed in the electronic device 1 and a visual user interface, such as a candidate's face photo and the performance level corresponding to the face photo. The components 11-13 of the electronic device 1 communicate with one another via a system bus.
The recruitment system 10 includes at least one computer readable instruction stored in the memory 11; the at least one computer readable instruction is executable by the processor 12 to implement the embodiments of the present application.
When the recruitment system 10 is executed by the processor 12, the following steps are implemented:
Step S1: obtain a face photo of the candidate.
In this embodiment, the recruitment system 10 receives a face photo of the candidate sent by a user, for example a photo sent from a terminal such as a mobile phone, a tablet computer or a self-service terminal device, either through a client pre-installed on the terminal or through a browser running on the terminal. The candidate's face photo may be in any of various image formats such as JPEG, PNG or GIF, which is not limited here.
Optionally, the system may first provide a photo-capture frame of uniform specification; the candidate takes the photo inside this frame and uploads it, so that all face photos received by the system share the same specification.
Alternatively, after the face photo sent by the candidate is received, image processing may be applied to it, for example cropping the received photo to a uniform pixel size, which makes the subsequent recognition of the candidate's face photo more accurate.
Step S2: identify the face photo using a predetermined recognition model to obtain the performance level corresponding to the face photo, so that a recruiter can refer to the obtained performance level when recruiting the candidate; the predetermined recognition model is a deep convolutional neural network model obtained in advance by training a preset number of face sample pictures labeled with different performance levels.
In this embodiment, after the candidate's face photo is obtained, the pre-trained recognition model is used to identify it and produce a recognition result. The recognition model can be trained, validated and optimized in advance on a large preset number of face sample pictures labeled with different performance levels until it can accurately recognize the label corresponding to each performance level; for example, the recognition model may be a deep convolutional neural network (CNN) model such as AlexNet, CaffeNet or ResNet. The labeled performance levels may, for example, be the five levels “A, B, C, D, E” or the four levels “excellent, good, medium, poor”. Face sample pictures can be extracted from a preset employee performance appraisal database, which contains the historical appraisal level of each employee in actual work together with the corresponding employee face picture; employee face pictures of employees at different performance levels are extracted as face sample pictures and labeled with the corresponding performance level, and the recognition model is trained on these labeled samples. After a candidate's face photo is obtained, the trained recognition model can be used to identify it and determine the corresponding performance level, i.e. one of “A, B, C, D, E” or one of “excellent, good, medium, poor”.
In an optional implementation, the training process of the predetermined recognition model is as follows:
A. For each preset performance level (such as “A, B, C, D, E” or “excellent, good, medium, poor”), prepare a corresponding preset number of face sample pictures and label each sample picture with its performance level. For example, employee face pictures of employees at different performance levels can be extracted from the preset employee performance appraisal database as face sample pictures and labeled with the corresponding performance level.
B. Preprocess each face sample picture, for example by cropping and flipping, to obtain the training pictures used for model training; processing every sample into a standard face picture of uniform specification with the face centered effectively improves the fidelity and accuracy of training.
C. Divide all training pictures into a training set at a first proportion (for example, 75%) and a verification set at a second proportion (for example, 25%).
D. Train the predetermined recognition model with the training set.
E. Use the verification set to verify the accuracy of the trained recognition model. If the accuracy is greater than or equal to a preset accuracy (for example, 95%), training ends; otherwise, increase the number of face sample pictures for each performance level and re-execute steps B, C, D and E until the accuracy of the trained recognition model reaches the preset accuracy.
After the trained recognition model has identified the performance level corresponding to the candidate's face photo, the recruiter can refer to that level when recruiting the candidate. For example, different scores may be set for different performance levels, so that the higher the candidate's identified performance level, the higher the recruitment score; or candidates whose identified level is “excellent” or “A” may receive extra recruitment points, thereby assisting recruitment.
Compared with the prior art, this embodiment identifies a candidate's face photo with a deep convolutional neural network model trained on a preset number of face sample pictures labeled with different performance levels, and predicts the candidate's performance level from the recognition result. Deep learning is used to find the relationship between faces and performance levels, so that a corresponding performance level can be determined from a candidate's face at recruitment time and used as one reference factor in the hiring decision, providing a new reference factor to assist recruitment and improve recruitment quality.
在一可选的实施例中,在上述图1的实施例的基础上,所述招聘系统10被所述处理器12执行实现所述步骤S2时,还包括:In an optional embodiment, on the basis of the foregoing embodiment of FIG. 1 , when the recruiting system 10 is executed by the processor 12 to implement the step S2, the method further includes:
从预设数据库中找出各个不同绩效等级对应的员工人脸照片,如可从预设的员工绩效考核数据库中抽取各个不同绩效等级的员工的员工人脸图片。进一步地,还可提取出各个员工人脸照片中的五官部位如眉毛、眼睛、耳朵、鼻子和/或嘴巴,统计各个不同绩效等级对应的五官部位照片中各个器官的预设分类特征分布数量,并根据统计结果确定各个不同绩效等级对应的各个器官绩效分类特征。即对不同绩效等级员工的员工人脸照片中的五官部位各个器官的特征进行特征统计归纳,针对五官部位各个器官具体可以参考的特征包括但不限于:上嘴唇的弧度、两个内眼角之间的距离、鼻尖与两个嘴角连接形成的角度、眉毛的形状、耳朵的种类等等。根据这些参考特征,可以做一些统计学上的分析,例如,各个不同绩效等级对应的五官部位照片中各个器官对应的参考特征中哪些特征值在同一类内样本的方差较小,在不同类间的方差较大。也可以将某些类别参考特征(比如鼻子的种类)放入二分类器中训练,若能取得较高的准确率,说明这种参考特征有作为分类参考的价值。 Find the employee face photos corresponding to different performance levels from the preset database. For example, the employee face images of employees with different performance levels can be extracted from the preset employee performance appraisal database. Further, the facial features such as eyebrows, eyes, ears, nose, and/or mouth in each employee's face photo may be extracted, and the number of preset classification features of each organ in the photo of the facial features corresponding to different performance levels may be counted. According to the statistical results, the performance classification characteristics of each organ corresponding to different performance levels are determined. That is, the characteristics of each organ of the facial features in the employee's face photos of employees of different performance levels are statistically summarized. The specific features that can be referred to for each organ of the facial features include, but are not limited to, the curvature of the upper lip and the inner corner of the two eyes. The distance, the angle formed by the connection of the tip of the nose with the two corners of the mouth, the shape of the eyebrows, the type of the ear, and the like. According to these reference features, some statistical analysis can be done. For example, among the reference features corresponding to each organ in the photos of the facial features corresponding to different performance levels, which of the feature values in the same class have a smaller variance, between different classes. The variance is large. It is also possible to put certain category reference features (such as the type of nose) into the second classifier for training. If a higher accuracy is obtained, the reference feature has a value as a classification reference.
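As one illustration of the binary-classifier check just described, a single reference feature can be tested for discriminative value with a simple scikit-learn pipeline; the feature encoding, the 0/1 labels and the 0.7 threshold are all assumptions made for the sketch, not values from the application.

```python
# Rough check of whether one reference feature (e.g. an encoded nose type)
# separates two performance classes well enough to keep as a classification reference.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def feature_is_useful(feature_values, labels, threshold=0.7):
    """feature_values: one scalar-encoded value per employee photo;
    labels: 1 for the higher performance class, 0 for the lower one."""
    X = [[v] for v in feature_values]      # single-feature design matrix
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)
    clf = LogisticRegression().fit(X_tr, y_tr)
    acc = accuracy_score(y_te, clf.predict(X_te))
    # Clearly better-than-chance accuracy suggests the feature is worth keeping.
    return acc, acc >= threshold
```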
Taking the eyes as an example, with the preset classification feature (i.e., the corresponding reference feature) being the distance between the two inner eye corners: the distribution of this distance parameter is counted over the employee face photos (i.e., the facial-organ photos) of each performance level. For instance, for the face photos of a number of employees at performance level A, the distribution of the inner-eye-corner distance across those photos is counted, and the most frequent value (either a specific value or a small range such as 50mm-51mm) is taken as the eye performance classification feature for level A. The eye performance classification features for levels B, C, D and E are obtained in the same way (a sketch of this counting step follows).
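The counting step can be sketched as follows; the 1 mm binning and the input format are assumptions made for illustration.

```python
# For each performance level, bin the inner-eye-corner distances (in mm) and take
# the most frequent 1 mm bin as that level's eye performance classification feature.
from collections import Counter, defaultdict

def eye_feature_per_level(measurements):
    """measurements: list of (level, distance_mm) pairs, e.g. ("A", 50.4)."""
    bins = defaultdict(Counter)
    for level, dist in measurements:
        lo = int(dist)                      # 50.4 mm falls into the 50-51 mm bin
        bins[level][(lo, lo + 1)] += 1
    return {level: counts.most_common(1)[0][0] for level, counts in bins.items()}

# eye_feature_per_level([("A", 50.2), ("A", 50.8), ("A", 52.1), ("B", 47.9)])
# -> {"A": (50, 51), "B": (47, 48)}
```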
After the performance classification feature of each organ for each performance level has been determined, a recognition model can be trained separately for each organ. For example, in an optional implementation, the training process of the recognition model for each organ is as follows:
A. For each performance level (e.g., "A, B, C, D, E" or "excellent, good, average, poor"), prepare a corresponding preset number of sample images of each facial organ, and label each organ sample image with its performance level. Each organ sample image is an image containing the organ's performance classification feature for the corresponding performance level. For example, for the eyes, images exhibiting the eye performance classification feature of each performance level can be prepared: if the eye performance classification feature of level A is "the distance between the two inner eye corners is between 50mm and 51mm", a preset number of sample images matching that feature can be prepared for level A.
B. Divide the organ sample images of each organ into a training set in a first proportion (e.g., 75%) and a validation set in a second proportion (e.g., 25%).
C. Train the recognition model of each organ with the training set.
D. Verify the accuracy of the trained recognition models with the validation set. If the accuracy of the recognition model of every organ is greater than or equal to a preset accuracy (e.g., 95%), training ends; if the recognition model of any organ falls below the preset accuracy, increase the number of organ sample images for each performance level of that organ and repeat steps B, C and D. Model training is complete once the recognition models of all organs reach the required accuracy (e.g., 95%). A sketch of this per-organ loop is given after the list.
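A condensed sketch of the per-organ loop is shown below; `load_organ_samples` and `train_and_validate` are assumed helpers (the latter performing the split, training and validation in the same way as the earlier sketch), not functions defined in the application.

```python
# Train one recognition model per facial organ, enlarging the sample set of any
# organ whose validation accuracy stays below the target.
ORGANS = ["eyebrows", "eyes", "ears", "nose", "mouth"]

def train_all_organ_models(load_organ_samples, train_and_validate, target_acc=0.95):
    models, pending = {}, set(ORGANS)
    while pending:
        for organ in list(pending):
            samples = load_organ_samples(organ)       # labelled images for this organ
            model, acc = train_and_validate(samples)  # split, train, validate
            if acc >= target_acc:
                models[organ] = model
                pending.discard(organ)                # this organ has reached the target
            # else: load_organ_samples(organ) is expected to return more images
            # per performance level on the next pass
    return models
```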
After the recognition model of each organ has been trained, the trained models can be used to recognize a candidate's face photo. First, the facial organs are extracted from the candidate's face photo; then the trained recognition model of each organ identifies the performance level of the corresponding organ, so that a performance level is obtained for each of the candidate's facial organs. Finally, the performance level corresponding to the candidate's face photo is calculated from the preset organ weights and the identified performance level of each organ. Specifically, in an optional implementation, the performance level M corresponding to the face photo is calculated as:
M = a*M1 + b*M2 + c*M3 + d*M4 + e*M5
where a is the weight coefficient of the performance level M1 of the eyebrows, b is the weight coefficient of the performance level M2 of the eyes, c is the weight coefficient of the performance level M3 of the ears, d is the weight coefficient of the performance level M4 of the nose, and e is the weight coefficient of the performance level M5 of the mouth.
For example, the weight coefficients a, b, c, d and e of the different organs can be set in advance, say a = 0.1 for the eyebrows, b = 0.3 for the eyes, c = 0.3 for the ears, d = 0.1 for the nose and e = 0.2 for the mouth. At the same time, an index value is assigned to each performance level, e.g., 5 for "excellent", 4 for "good", 2 for "average" and 1 for "poor". If the trained recognition models identify the candidate's eyebrows as "good", eyes as "good", ears as "average", nose as "poor" and mouth as "excellent", the performance level of the candidate's face photo is M = 0.1*4 + 0.3*4 + 0.3*2 + 0.1*1 + 0.2*5 = 3.3. The performance level score M of each candidate can be calculated in this way and the candidates ranked by score to assist recruitment. A correspondence between M scores and performance levels can also be set in advance, so that candidates are classified into performance levels according to their M scores (a sketch of this calculation is given below).
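The weighted score of the example can be written out directly; the weights and grade-to-index values below are the illustrative figures from the text.

```python
# Weighted face score M = a*M1 + b*M2 + c*M3 + d*M4 + e*M5 using the example
# weights and the example grade-to-index mapping.
WEIGHTS = {"eyebrows": 0.1, "eyes": 0.3, "ears": 0.3, "nose": 0.1, "mouth": 0.2}
GRADE_VALUE = {"excellent": 5, "good": 4, "average": 2, "poor": 1}

def face_score(organ_grades):
    """organ_grades: e.g. {"eyebrows": "good", "eyes": "good", ...}"""
    return sum(WEIGHTS[organ] * GRADE_VALUE[grade] for organ, grade in organ_grades.items())

# face_score({"eyebrows": "good", "eyes": "good", "ears": "average",
#             "nose": "poor", "mouth": "excellent"})
# -> 0.1*4 + 0.3*4 + 0.3*2 + 0.1*1 + 0.2*5 = 3.3
```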
As shown in FIG. 2, FIG. 2 is a schematic flowchart of an embodiment of the recruitment method of the present application. The recruitment method includes the following steps:
Step S10: obtain a face photo of the candidate.
In this embodiment, the recruitment system receives the candidate's face photo sent by a user, for example a photo sent from a terminal such as a mobile phone, tablet computer or self-service terminal, either through a client pre-installed on the terminal or through a browser system running on the terminal. The face photo may be in any image format such as JPEG, PNG or GIF, which is not limited here.
Optionally, the system may first provide a capture frame of uniform specification for candidate face photos; the candidate takes the photo within the provided capture frame and uploads it to the system, so that all candidate face photos received by the system share a uniform specification.
Alternatively, after receiving the face photo sent by the candidate, the system may apply image processing to it, for example cropping it to a uniform pixel size, so that the subsequent recognition of the candidate's face photo is more accurate (a minimal sketch of this normalization is given below).
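A minimal sketch of this normalization step, assuming Pillow for decoding and a 224x224 target size (an assumed value, not one from the application):

```python
# Decode an uploaded photo (JPEG/PNG/GIF are all supported by Pillow), centre-crop
# it to a square and resize it to a uniform pixel size before recognition.
import io
from PIL import Image

def normalise_upload(raw_bytes, size=(224, 224)):
    img = Image.open(io.BytesIO(raw_bytes)).convert("RGB")
    w, h = img.size
    side = min(w, h)                          # centre-crop to the largest square
    left, top = (w - side) // 2, (h - side) // 2
    return img.crop((left, top, left + side, top + side)).resize(size)
```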
Step S20: recognize the face photo with a predetermined recognition model to obtain the performance level corresponding to the face photo, so that recruiters can refer to the obtained performance level when recruiting the candidate; the predetermined recognition model is a deep convolutional neural network model obtained in advance by training on a preset number of face sample images labeled with different performance levels.
In this embodiment, after the candidate's face photo is obtained, the pre-trained recognition model recognizes it and produces the recognition result for that photo. The recognition model can be trained, validated and optimized in advance on a large number of face sample images labeled with different performance levels, so that it learns to accurately recognize the labels corresponding to the different performance levels. For example, the recognition model may adopt a deep convolutional neural network (CNN) such as AlexNet, CaffeNet or ResNet. The labeled performance levels may be the five levels "A, B, C, D, E" or the four levels "excellent, good, average, poor". Face sample images can be extracted from a preset employee performance appraisal database, which stores the performance appraisal level of each employee in historical appraisal records together with the corresponding employee face image; the face images of employees at each performance level are extracted as face sample images, labeled with the corresponding performance level, and used to train the recognition model. After the candidate's face photo is obtained, the trained recognition model recognizes it and identifies the corresponding performance level, i.e., one of "A, B, C, D, E" or "excellent, good, average, poor" (a sketch of such a classifier is given below).
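As a sketch of what such a classifier looks like at inference time, the block below instantiates one of the CNN families mentioned above (ResNet-18 via torchvision) with one output per performance level; the checkpoint file name is hypothetical and the weights are assumed to come from a training run like the one sketched earlier.

```python
# Classify a normalised 224x224 face photo into one of the performance levels
# with a ResNet-18 whose final layer has one output per level.
import torch
from torchvision import models, transforms

LEVELS = ["A", "B", "C", "D", "E"]

model = models.resnet18(num_classes=len(LEVELS))
model.load_state_dict(torch.load("recognition_model.pt"))  # assumed trained weights
model.eval()

to_tensor = transforms.ToTensor()

def predict_level(face_img):                 # face_img: a 224x224 PIL image
    with torch.no_grad():
        logits = model(to_tensor(face_img).unsqueeze(0))   # add a batch dimension
    return LEVELS[int(logits.argmax(1))]
```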
In an optional implementation, the training process of the predetermined recognition model is as follows:
A. For each preset performance level (e.g., "A, B, C, D, E" or "excellent, good, average, poor"), prepare a corresponding preset number of face sample images and label each sample image with its performance level. For example, face images of employees at each performance level can be extracted from a preset employee performance appraisal database as face sample images and labeled with the corresponding employee's performance level.
B. Apply image preprocessing to each face sample image to obtain the training images for model training. Performing preprocessing such as cropping and flipping before training converts each face sample image into a standard image of uniform specification with the face centered, which effectively improves the fidelity and accuracy of model training.
C. Divide all training images into a training set in a first proportion (e.g., 75%) and a validation set in a second proportion (e.g., 25%).
D. Train the predetermined recognition model with the training set.
E. Verify the accuracy of the trained recognition model with the validation set. If the accuracy is greater than or equal to a preset accuracy (e.g., 95%), training ends; if the accuracy is less than the preset accuracy, increase the number of face sample images for each performance level and repeat steps B, C, D and E until the accuracy of the trained recognition model reaches the preset accuracy.
After the trained recognition model has identified the performance level corresponding to a candidate's face photo, recruiters can refer to the obtained performance level when deciding whether to hire the candidate. For example, different scores can be assigned according to the performance level: the higher the level identified by the trained recognition model, the higher the candidate's recruitment score, or candidates identified as "excellent" or "A" can receive extra points, thereby assisting recruitment.
Compared with the prior art, this embodiment recognizes a candidate's face photo with a deep convolutional neural network model trained on a preset number of face sample images labeled with different performance levels, and predicts the candidate's performance level from the recognition result. Deep learning is used to find the relationship between faces and performance levels, and during recruitment the corresponding performance level is determined from the candidate's face. The predicted performance level can serve as a reference factor for recruitment, providing a new reference factor that assists hiring and improves recruitment quality.
In an optional embodiment, based on the above embodiment, the method further includes:
Employee face photos corresponding to each performance level are found in a preset database, for example by extracting face images of employees at each performance level from a preset employee performance appraisal database. Further, the facial organs (eyebrows, eyes, ears, nose and/or mouth) can be extracted from each employee face photo, the distribution of preset classification features of each organ can be counted over the facial-organ photos of each performance level, and the performance classification feature of each organ for each performance level can be determined from these statistics. In other words, the features of each facial organ in the face photos of employees at different performance levels are statistically summarized. Features that can be considered for each organ include, but are not limited to, the curvature of the upper lip, the distance between the two inner eye corners, the angle formed by connecting the nose tip to the two mouth corners, the shape of the eyebrows, and the type of ear. Based on these reference features, some statistical analysis can be performed, for example determining which feature values have a small variance within the same performance class and a large variance between classes. Certain categorical reference features (such as the type of nose) can also be fed into a binary classifier for training; if a high accuracy is achieved, the reference feature is valuable as a classification reference.
Taking the eyes as an example, with the preset classification feature (i.e., the corresponding reference feature) being the distance between the two inner eye corners: the distribution of this distance parameter is counted over the employee face photos (i.e., the facial-organ photos) of each performance level. For instance, for the face photos of a number of employees at performance level A, the distribution of the inner-eye-corner distance across those photos is counted, and the most frequent value (either a specific value or a small range such as 50mm-51mm) is taken as the eye performance classification feature for level A. The eye performance classification features for levels B, C, D and E are obtained in the same way.
After the performance classification feature of each organ for each performance level has been determined, a recognition model can be trained separately for each organ. For example, in an optional implementation, the training process of the recognition model for each organ is as follows:
A. For each performance level (e.g., "A, B, C, D, E" or "excellent, good, average, poor"), prepare a corresponding preset number of sample images of each facial organ, and label each organ sample image with its performance level. Each organ sample image is an image containing the organ's performance classification feature for the corresponding performance level. For example, for the eyes, images exhibiting the eye performance classification feature of each performance level can be prepared: if the eye performance classification feature of level A is "the distance between the two inner eye corners is between 50mm and 51mm", a preset number of sample images matching that feature can be prepared for level A.
B. Divide the organ sample images of each organ into a training set in a first proportion (e.g., 75%) and a validation set in a second proportion (e.g., 25%).
C. Train the recognition model of each organ with the training set.
D. Verify the accuracy of the trained recognition models with the validation set. If the accuracy of the recognition model of every organ is greater than or equal to a preset accuracy (e.g., 95%), training ends; if the recognition model of any organ falls below the preset accuracy, increase the number of organ sample images for each performance level of that organ and repeat steps B, C and D. Model training is complete once the recognition models of all organs reach the required accuracy (e.g., 95%).
After the recognition model of each organ has been trained, the trained models can be used to recognize a candidate's face photo. First, the facial organs are extracted from the candidate's face photo; then the trained recognition model of each organ identifies the performance level of the corresponding organ, so that a performance level is obtained for each of the candidate's facial organs. Finally, the performance level corresponding to the candidate's face photo is calculated from the preset organ weights and the identified performance level of each organ. Specifically, in an optional implementation, the performance level M corresponding to the face photo is calculated as:
M = a*M1 + b*M2 + c*M3 + d*M4 + e*M5
where a is the weight coefficient of the performance level M1 of the eyebrows, b is the weight coefficient of the performance level M2 of the eyes, c is the weight coefficient of the performance level M3 of the ears, d is the weight coefficient of the performance level M4 of the nose, and e is the weight coefficient of the performance level M5 of the mouth.
For example, the weight coefficients a, b, c, d and e of the different organs can be set in advance, say a = 0.1 for the eyebrows, b = 0.3 for the eyes, c = 0.3 for the ears, d = 0.1 for the nose and e = 0.2 for the mouth. At the same time, an index value is assigned to each performance level, e.g., 5 for "excellent", 4 for "good", 2 for "average" and 1 for "poor". If the trained recognition models identify the candidate's eyebrows as "good", eyes as "good", ears as "average", nose as "poor" and mouth as "excellent", the performance level of the candidate's face photo is M = 0.1*4 + 0.3*4 + 0.3*2 + 0.1*1 + 0.2*5 = 3.3. The performance level score M of each candidate can be calculated in this way and the candidates ranked by score to assist recruitment. A correspondence between M scores and performance levels can also be set in advance, so that candidates are classified into performance levels according to their M scores (a sketch of this ranking step is given below).
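The ranking and grade-mapping step can be sketched as follows; the band boundaries used to map an M score back to a performance level are illustrative assumptions, not values from the application.

```python
# Rank candidates by their weighted score M and map each score to a grade band.
def to_grade(m_score):
    if m_score >= 4.5:
        return "excellent"
    if m_score >= 3.5:
        return "good"
    if m_score >= 2.0:
        return "average"
    return "poor"

def rank_candidates(scored):
    """scored: list of (candidate_id, M score) pairs."""
    ranked = sorted(scored, key=lambda item: item[1], reverse=True)
    return [(cid, m, to_grade(m)) for cid, m in ranked]

# rank_candidates([("cand-17", 3.3), ("cand-08", 4.1)])
# -> [("cand-08", 4.1, "good"), ("cand-17", 3.3, "average")]
```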
In addition, the present application further provides a computer-readable storage medium storing a recruitment system, the recruitment system being executable by at least one processor to cause the at least one processor to perform the steps of the recruitment method of the above embodiments. The specific implementation of steps S10, S20, etc. of the recruitment method is as described above and is not repeated here.
It should be noted that, in this document, the terms "comprise", "include" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or apparatus that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article or apparatus that includes that element.
Through the description of the above embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus the necessary general-purpose hardware platform, or, of course, by hardware, although in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, magnetic disk or optical disc) and including several instructions that cause a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present application.
The preferred embodiments of the present application have been described above with reference to the drawings, which does not limit the scope of rights of the present application. The serial numbers of the above embodiments are for description only and do not represent the relative merits of the embodiments. In addition, although a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in an order different from that given here.
Those skilled in the art can implement the present application in various variations without departing from the scope and spirit of the present application; for example, features of one embodiment can be used in another embodiment to obtain a further embodiment. Any modification, equivalent replacement or improvement made within the technical concept of the present application shall fall within the scope of rights of the present application.

Claims (20)

  1. An electronic device, characterized in that the electronic device comprises a memory and a processor, the memory storing a recruitment system executable on the processor, the recruitment system implementing the following steps when executed by the processor:
    A1. obtaining a face photo of a candidate;
    B1. recognizing the face photo with a predetermined recognition model to obtain a performance level corresponding to the face photo, for a recruiter to refer to the obtained performance level when recruiting the candidate; wherein the predetermined recognition model is a deep convolutional neural network model obtained in advance by training on a preset number of face sample images labeled with different performance levels.
  2. The electronic device according to claim 1, characterized in that, when the recruitment system is executed by the processor to implement step B1, the following is further included:
    finding, in a preset database, the employee face photos corresponding to each performance level, extracting the facial organs from each employee face photo, counting the distribution of preset classification features of each organ in the facial-organ photos corresponding to each performance level, and determining, according to the statistics, the performance classification feature of each organ corresponding to each performance level;
    wherein the facial organs comprise at least one of eyebrows, eyes, ears, nose and mouth; and the preset classification feature is the shape of the eyebrows, the distance between the two inner eye corners, the type of the ears, the angle formed by connecting the nose tip to the two mouth corners, or the curvature of the upper lip.
  3. The electronic device according to claim 1, characterized in that the training process of the predetermined recognition model is as follows:
    A. preparing, for each performance level, a corresponding preset number of sample images of each facial organ, and labeling each organ sample image with the corresponding performance level; wherein each organ sample image is an image containing the performance classification feature of the organ corresponding to the performance level;
    B. dividing the organ sample images of each organ into a training set in a first proportion and a validation set in a second proportion;
    C. training the recognition model corresponding to each organ with the training set;
    D. verifying the accuracy of the trained recognition models with the validation set; if the accuracy of the recognition model corresponding to every organ is greater than or equal to a preset accuracy, ending the training, or, if the accuracy of the recognition model corresponding to any organ is less than the preset accuracy, increasing the number of organ sample images corresponding to the performance levels of that organ and re-executing the above steps B, C and D.
  4. The electronic device according to claim 2, characterized in that the training process of the predetermined recognition model is as follows:
    A. preparing, for each performance level, a corresponding preset number of sample images of each facial organ, and labeling each organ sample image with the corresponding performance level; wherein each organ sample image is an image containing the performance classification feature of the organ corresponding to the performance level;
    B. dividing the organ sample images of each organ into a training set in a first proportion and a validation set in a second proportion;
    C. training the recognition model corresponding to each organ with the training set;
    D. verifying the accuracy of the trained recognition models with the validation set; if the accuracy of the recognition model corresponding to every organ is greater than or equal to a preset accuracy, ending the training, or, if the accuracy of the recognition model corresponding to any organ is less than the preset accuracy, increasing the number of organ sample images corresponding to the performance levels of that organ and re-executing the above steps B, C and D.
  5. The electronic device according to claim 3, characterized in that, when the recruitment system is executed by the processor to implement step B1, the following is further included:
    extracting the facial organs from the face photo;
    identifying, with the trained recognition model of each organ, the performance level corresponding to each organ among the extracted facial organs;
    calculating the performance level M corresponding to the face photo from the preset weights of the different organs and the identified performance level of each organ according to the formula:
    M = a*M1 + b*M2 + c*M3 + d*M4 + e*M5
    wherein a is the weight coefficient of the performance level M1 of the eyebrows, b is the weight coefficient of the performance level M2 of the eyes, c is the weight coefficient of the performance level M3 of the ears, d is the weight coefficient of the performance level M4 of the nose, and e is the weight coefficient of the performance level M5 of the mouth.
  6. The electronic device according to claim 4, characterized in that, when the recruitment system is executed by the processor to implement step B1, the following is further included:
    extracting the facial organs from the face photo;
    identifying, with the trained recognition model of each organ, the performance level corresponding to each organ among the extracted facial organs;
    calculating the performance level M corresponding to the face photo from the preset weights of the different organs and the identified performance level of each organ according to the formula:
    M = a*M1 + b*M2 + c*M3 + d*M4 + e*M5
    wherein a is the weight coefficient of the performance level M1 of the eyebrows, b is the weight coefficient of the performance level M2 of the eyes, c is the weight coefficient of the performance level M3 of the ears, d is the weight coefficient of the performance level M4 of the nose, and e is the weight coefficient of the performance level M5 of the mouth.
  7. The electronic device according to claim 1, characterized in that the training process of the predetermined recognition model is as follows:
    A. preparing, for each performance level, a corresponding preset number of face sample images, and labeling each face sample image with the corresponding performance level;
    B. performing image preprocessing on each face sample image to obtain training images for model training;
    C. dividing all training images into a training set in a first proportion and a validation set in a second proportion;
    D. training the predetermined recognition model with the training set;
    E. verifying the accuracy of the trained recognition model with the validation set; if the accuracy is greater than or equal to a preset accuracy, ending the training, or, if the accuracy is less than the preset accuracy, increasing the number of face sample images corresponding to each performance level and re-executing the above steps B, C, D and E.
  8. A recruitment method, characterized in that the recruitment method comprises:
    step one: obtaining a face photo of a candidate;
    step two: recognizing the face photo with a predetermined recognition model to obtain a performance level corresponding to the face photo, for a recruiter to refer to the obtained performance level when recruiting the candidate; wherein the predetermined recognition model is a deep convolutional neural network model obtained in advance by training on a preset number of face sample images labeled with different performance levels.
  9. The recruitment method according to claim 8, characterized in that the method further comprises:
    finding, in a preset database, the employee face photos corresponding to each performance level, extracting the facial organs from each employee face photo, counting the distribution of preset classification features of each organ in the facial-organ photos corresponding to each performance level, and determining, according to the statistics, the performance classification feature of each organ corresponding to each performance level;
    wherein the facial organs comprise at least one of eyebrows, eyes, ears, nose and mouth; and the preset classification feature is the shape of the eyebrows, the distance between the two inner eye corners, the type of the ears, the angle formed by connecting the nose tip to the two mouth corners, or the curvature of the upper lip.
  10. The recruitment method according to claim 8, characterized in that the training process of the predetermined recognition model is as follows:
    A. preparing, for each performance level, a corresponding preset number of sample images of each facial organ, and labeling each organ sample image with the corresponding performance level; wherein each organ sample image is an image containing the performance classification feature of the organ corresponding to the performance level;
    B. dividing the organ sample images of each organ into a training set in a first proportion and a validation set in a second proportion;
    C. training the recognition model corresponding to each organ with the training set;
    D. verifying the accuracy of the trained recognition models with the validation set; if the accuracy of the recognition model corresponding to every organ is greater than or equal to a preset accuracy, ending the training, or, if the accuracy of the recognition model corresponding to any organ is less than the preset accuracy, increasing the number of organ sample images corresponding to the performance levels of that organ and re-executing the above steps B, C and D.
  11. The recruitment method according to claim 9, characterized in that the training process of the predetermined recognition model is as follows:
    A. preparing, for each performance level, a corresponding preset number of sample images of each facial organ, and labeling each organ sample image with the corresponding performance level; wherein each organ sample image is an image containing the performance classification feature of the organ corresponding to the performance level;
    B. dividing the organ sample images of each organ into a training set in a first proportion and a validation set in a second proportion;
    C. training the recognition model corresponding to each organ with the training set;
    D. verifying the accuracy of the trained recognition models with the validation set; if the accuracy of the recognition model corresponding to every organ is greater than or equal to a preset accuracy, ending the training, or, if the accuracy of the recognition model corresponding to any organ is less than the preset accuracy, increasing the number of organ sample images corresponding to the performance levels of that organ and re-executing the above steps B, C and D.
  12. The recruitment method according to claim 10, characterized in that said step two comprises:
    extracting the facial organs from the face photo;
    identifying, with the trained recognition model of each organ, the performance level corresponding to each organ among the extracted facial organs;
    calculating the performance level M corresponding to the face photo from the preset weights of the different organs and the identified performance level of each organ according to the formula:
    M = a*M1 + b*M2 + c*M3 + d*M4 + e*M5
    wherein a is the weight coefficient of the performance level M1 of the eyebrows, b is the weight coefficient of the performance level M2 of the eyes, c is the weight coefficient of the performance level M3 of the ears, d is the weight coefficient of the performance level M4 of the nose, and e is the weight coefficient of the performance level M5 of the mouth.
  13. The recruitment method according to claim 11, characterized in that said step two comprises:
    extracting the facial organs from the face photo;
    identifying, with the trained recognition model of each organ, the performance level corresponding to each organ among the extracted facial organs;
    calculating the performance level M corresponding to the face photo from the preset weights of the different organs and the identified performance level of each organ according to the formula:
    M = a*M1 + b*M2 + c*M3 + d*M4 + e*M5
    wherein a is the weight coefficient of the performance level M1 of the eyebrows, b is the weight coefficient of the performance level M2 of the eyes, c is the weight coefficient of the performance level M3 of the ears, d is the weight coefficient of the performance level M4 of the nose, and e is the weight coefficient of the performance level M5 of the mouth.
  14. The recruitment method according to claim 8, characterized in that the training process of the predetermined recognition model is as follows:
    A. preparing, for each performance level, a corresponding preset number of face sample images, and labeling each face sample image with the corresponding performance level;
    B. performing image preprocessing on each face sample image to obtain training images for model training;
    C. dividing all training images into a training set in a first proportion and a validation set in a second proportion;
    D. training the predetermined recognition model with the training set;
    E. verifying the accuracy of the trained recognition model with the validation set; if the accuracy is greater than or equal to a preset accuracy, ending the training, or, if the accuracy is less than the preset accuracy, increasing the number of face sample images corresponding to each performance level and re-executing the above steps B, C, D and E.
  15. A computer-readable storage medium, characterized in that a recruitment system is stored on the computer-readable storage medium, the recruitment system being executable by at least one processor to cause the at least one processor to perform the following steps:
    A2. obtaining a face photo of a candidate;
    B2. recognizing the face photo with a predetermined recognition model to obtain a performance level corresponding to the face photo, for a recruiter to refer to the obtained performance level when recruiting the candidate; wherein the predetermined recognition model is a deep convolutional neural network model obtained in advance by training on a preset number of face sample images labeled with different performance levels.
  16. The computer-readable storage medium according to claim 15, characterized in that, when the recruitment system is executed by the processor to implement step B2, the following is further included:
    finding, in a preset database, the employee face photos corresponding to each performance level, extracting the facial organs from each employee face photo, counting the distribution of preset classification features of each organ in the facial-organ photos corresponding to each performance level, and determining, according to the statistics, the performance classification feature of each organ corresponding to each performance level;
    wherein the facial organs comprise at least one of eyebrows, eyes, ears, nose and mouth; and the preset classification feature is the shape of the eyebrows, the distance between the two inner eye corners, the type of the ears, the angle formed by connecting the nose tip to the two mouth corners, or the curvature of the upper lip.
  17. The computer-readable storage medium according to claim 15, characterized in that the training process of the predetermined recognition model is as follows:
    A. preparing, for each performance level, a corresponding preset number of sample images of each facial organ, and labeling each organ sample image with the corresponding performance level; wherein each organ sample image is an image containing the performance classification feature of the organ corresponding to the performance level;
    B. dividing the organ sample images of each organ into a training set in a first proportion and a validation set in a second proportion;
    C. training the recognition model corresponding to each organ with the training set;
    D. verifying the accuracy of the trained recognition models with the validation set; if the accuracy of the recognition model corresponding to every organ is greater than or equal to a preset accuracy, ending the training, or, if the accuracy of the recognition model corresponding to any organ is less than the preset accuracy, increasing the number of organ sample images corresponding to the performance levels of that organ and re-executing the above steps B, C and D.
  18. The computer-readable storage medium according to claim 16, characterized in that the training process of the predetermined recognition model is as follows:
    A. preparing, for each performance level, a corresponding preset number of sample images of each facial organ, and labeling each organ sample image with the corresponding performance level; wherein each organ sample image is an image containing the performance classification feature of the organ corresponding to the performance level;
    B. dividing the organ sample images of each organ into a training set in a first proportion and a validation set in a second proportion;
    C. training the recognition model corresponding to each organ with the training set;
    D. verifying the accuracy of the trained recognition models with the validation set; if the accuracy of the recognition model corresponding to every organ is greater than or equal to a preset accuracy, ending the training, or, if the accuracy of the recognition model corresponding to any organ is less than the preset accuracy, increasing the number of organ sample images corresponding to the performance levels of that organ and re-executing the above steps B, C and D.
  19. The computer-readable storage medium of claim 17, wherein, when the recruitment system is executed by the processor to implement step B2, the following is further included:
    extracting the facial organ regions from the face photo;
    identifying the performance level corresponding to each organ among the extracted facial organs using the trained recognition model corresponding to each organ;
    calculating the performance level M corresponding to the face photo according to the preset weights of the different organs and the identified performance level of each organ, using the formula:
    M = a*M1 + b*M2 + c*M3 + d*M4 + e*M5
    where a is the weight coefficient of the performance level M1 corresponding to the eyebrows, b is the weight coefficient of the performance level M2 corresponding to the eyes, c is the weight coefficient of the performance level M3 corresponding to the ears, d is the weight coefficient of the performance level M4 corresponding to the nose, and e is the weight coefficient of the performance level M5 corresponding to the mouth.
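The fusion step is a plain weighted sum of the five per-organ levels. A minimal sketch follows; the weight values and the example levels are illustrative assumptions, since the claim only requires the weights to be preset and does not state their values.

```python
# Assumed weights a..e for eyebrows, eyes, ears, nose and mouth.
ORGAN_WEIGHTS = {"eyebrows": 0.15, "eyes": 0.30, "ears": 0.10, "nose": 0.15, "mouth": 0.30}

def face_performance_level(organ_levels):
    """Compute M = a*M1 + b*M2 + c*M3 + d*M4 + e*M5 from per-organ levels."""
    return sum(w * organ_levels[organ] for organ, w in ORGAN_WEIGHTS.items())

# Example: levels predicted by the per-organ recognition models for one face photo.
predicted = {"eyebrows": 3, "eyes": 4, "ears": 2, "nose": 3, "mouth": 4}
print(face_performance_level(predicted))  # 0.45 + 1.2 + 0.2 + 0.45 + 1.2 = 3.5
```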
  20. The computer-readable storage medium of claim 18, wherein, when the recruitment system is executed by the processor to implement step B2, the following is further included:
    extracting the facial organ regions from the face photo;
    identifying the performance level corresponding to each organ among the extracted facial organs using the trained recognition model corresponding to each organ;
    calculating the performance level M corresponding to the face photo according to the preset weights of the different organs and the identified performance level of each organ, using the formula:
    M = a*M1 + b*M2 + c*M3 + d*M4 + e*M5
    where a is the weight coefficient of the performance level M1 corresponding to the eyebrows, b is the weight coefficient of the performance level M2 corresponding to the eyes, c is the weight coefficient of the performance level M3 corresponding to the ears, d is the weight coefficient of the performance level M4 corresponding to the nose, and e is the weight coefficient of the performance level M5 corresponding to the mouth.
PCT/CN2017/108764 2017-09-30 2017-10-31 Recruitment method, electronic device, and readable storage medium WO2019061660A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710916039.1 2017-09-30
CN201710916039.1A CN107784482A (en) 2017-09-30 2017-09-30 Recruitment method, electronic device, and readable storage medium

Publications (1)

Publication Number Publication Date
WO2019061660A1 true WO2019061660A1 (en) 2019-04-04

Family

ID=61433681

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/108764 WO2019061660A1 (en) 2017-09-30 2017-10-31 Recruitment method, electronic device, and readable storage medium

Country Status (2)

Country Link
CN (1) CN107784482A (en)
WO (1) WO2019061660A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108875590A (en) * 2018-05-25 2018-11-23 平安科技(深圳)有限公司 BMI prediction technique, device, computer equipment and storage medium
CN109308565B (en) * 2018-08-01 2024-03-19 平安科技(深圳)有限公司 Crowd performance grade identification method and device, storage medium and computer equipment
CN111680597B (en) * 2020-05-29 2023-09-01 北京百度网讯科技有限公司 Face recognition model processing method, device, equipment and storage medium
CN113240390A (en) * 2021-05-14 2021-08-10 广州红海云计算股份有限公司 Internet-based intelligent human resource management method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080187186A1 (en) * 2007-02-02 2008-08-07 Sony Corporation Image processing apparatus, image processing method and computer program
CN105320945A (en) * 2015-10-30 2016-02-10 小米科技有限责任公司 Image classification method and apparatus
CN107045618A (en) * 2016-02-05 2017-08-15 北京陌上花科技有限公司 A kind of facial expression recognizing method and device
CN107169455A (en) * 2017-05-16 2017-09-15 中山大学 Face character recognition methods based on depth local feature

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103065276A (en) * 2012-05-17 2013-04-24 刘学勇 Characteristic quantitative method
CN104504376A (en) * 2014-12-22 2015-04-08 厦门美图之家科技有限公司 Age classification method and system for face images

Also Published As

Publication number Publication date
CN107784482A (en) 2018-03-09

Similar Documents

Publication Publication Date Title
US10755084B2 (en) Face authentication to mitigate spoofing
WO2019109526A1 (en) Method and device for age recognition of face image, storage medium
US10509985B2 (en) Method and apparatus for security inspection
WO2019238063A1 (en) Text detection and analysis method and apparatus, and device
WO2019119505A1 (en) Face recognition method and device, computer device and storage medium
US9639769B2 (en) Liveness detection
WO2019174130A1 (en) Bill recognition method, server, and computer readable storage medium
US9839376B1 (en) Systems and methods for automated body mass index calculation to determine value
WO2018028546A1 (en) Key point positioning method, terminal, and computer storage medium
WO2017016240A1 (en) Banknote serial number identification method
WO2019061660A1 (en) Recruitment method, electronic device, and readable storage medium
WO2019062080A1 (en) Identity recognition method, electronic device, and computer readable storage medium
US10748217B1 (en) Systems and methods for automated body mass index calculation
CN108717663B (en) Facial tag fraud judging method, device, equipment and medium based on micro expression
WO2019071660A1 (en) Bill information identification method, electronic device, and readable storage medium
CN108090830B (en) Credit risk rating method and device based on facial portrait
WO2019174131A1 (en) Identity authentication method, server, and computer readable storage medium
WO2019071738A1 (en) Examinee identity authentication method and apparatus, readable storage medium and terminal device
CN107679475B (en) Store monitoring and evaluating method and device and storage medium
WO2012132418A1 (en) Characteristic estimation device
CN103503000A (en) Facial recognition
WO2018072028A1 (en) Face authentication to mitigate spoofing
CN111785384A (en) Abnormal data identification method based on artificial intelligence and related equipment
WO2021139316A1 (en) Method and apparatus for establishing expression recognition model, and computer device and storage medium
CN100371945C (en) Computer assisted calligraphic works distinguishing method between true and false

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17926739

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 23/09/2020)

122 Ep: pct application non-entry in european phase

Ref document number: 17926739

Country of ref document: EP

Kind code of ref document: A1