WO2020093614A1 - Personality prediction method and apparatus for an interviewee, and computer-readable storage medium - Google Patents

Personality prediction method and apparatus for an interviewee, and computer-readable storage medium

Info

Publication number
WO2020093614A1
WO2020093614A1 PCT/CN2019/073939 CN2019073939W WO2020093614A1 WO 2020093614 A1 WO2020093614 A1 WO 2020093614A1 CN 2019073939 W CN2019073939 W CN 2019073939W WO 2020093614 A1 WO2020093614 A1 WO 2020093614A1
Authority
WO
WIPO (PCT)
Prior art keywords
personality
interviewer
interview
face
face image
Prior art date
Application number
PCT/CN2019/073939
Other languages
English (en)
French (fr)
Inventor
朱昱锦
徐国强
邱寒
Original Assignee
深圳壹账通智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳壹账通智能科技有限公司
Publication of WO2020093614A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/117 Identification of persons
    • A61B5/1171 Identification of persons based on the shapes or appearances of their bodies or parts thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00 Arrangements for software engineering
    • G06F8/60 Software deployment
    • G06F8/61 Installation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00 Arrangements for software engineering
    • G06F8/60 Software deployment
    • G06F8/65 Updates

Definitions

  • The present application relates to the field of artificial intelligence technology, and in particular to a personality prediction method and apparatus for an interviewee, and a computer non-volatile readable storage medium.
  • In the current interview process, the interviewer usually reviews the candidate's resume and prepares interview questions for the position; the interviewer then asks the candidate to introduce himself or herself, poses questions for the candidate to answer, and may ask follow-up questions based on the candidate's answers, while the candidate may ask the interviewer to introduce the company's basic situation. Finally, the interviewer gives an evaluation report, and qualified candidates move on to the next stage. Under the existing interview process, the interviewer can use the resume information and the candidate's answers to professional questions, combined with previous experience and data, to roughly assess the candidate's ability, or judge the candidate's personality tendencies from the candidate's answers to other questions.
  • The present application provides a personality prediction method and apparatus for an interviewee, and a computer non-volatile readable storage medium, the main purpose of which is to solve the problem in the prior art that it is difficult to obtain information about an interviewee's personality traits during the interview process.
  • According to one aspect, a personality prediction method for an interviewee includes: acquiring a plurality of face sample images carrying personality evaluation result labels; inputting the face sample images into a convolutional neural network for training to construct a personality prediction model, the personality prediction model recording the mapping relationship between face images and personality evaluation results; and inputting a face image of an interviewee into the personality prediction model to predict the personality evaluation result corresponding to the interviewee.
  • According to another aspect, a personality prediction apparatus for an interviewee is provided. The apparatus includes: an acquisition unit for acquiring a plurality of face sample images carrying personality evaluation result labels; a construction unit for inputting the face sample images acquired by the acquisition unit into a convolutional neural network for training to construct a personality prediction model, the personality prediction model recording the mapping relationship between face images and personality evaluation results; and a prediction unit for inputting an interviewee's face image into the personality prediction model constructed by the construction unit to predict the personality evaluation result corresponding to the interviewee.
  • According to a further aspect, a non-volatile computer-readable storage medium is provided, on which computer-readable instructions are stored. When the computer-readable instructions are executed by a processor, the following steps are implemented: acquiring a plurality of face sample images carrying personality evaluation result labels; inputting the face sample images into a convolutional neural network for training to construct a personality prediction model, the personality prediction model recording the mapping relationship between face images and personality evaluation results; and inputting the interviewee's face image into the personality prediction model to predict the personality evaluation result corresponding to the interviewee.
  • According to a further aspect, a computer device is provided, including a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor. When the processor executes the computer-readable instructions, the following steps are implemented: acquiring a plurality of face sample images carrying personality evaluation result labels; inputting the face sample images into a convolutional neural network for training to construct a personality prediction model, the personality prediction model recording the mapping relationship between face images and personality evaluation results; and inputting the interviewee's face image into the personality prediction model to predict the personality evaluation result corresponding to the interviewee.
  • In the personality prediction method and apparatus for an interviewee provided by the present application, face sample images are input into a convolutional neural network for training to construct a personality prediction model that records the mapping relationship between face images and personality evaluation results, so the interviewee's evaluation results in each personality dimension can be predicted by the model. Compared with the prior-art approach in which the interviewer judges a candidate's personality traits only by talking with the candidate and drawing on previous experience and data, the embodiments of the present application predict the interviewee's evaluation results in each personality dimension by constructing a personality prediction model, provide the interviewer with more information about the interviewee's personality traits, help the interviewer measure in all respects how well the interviewee matches the interviewed position and its team, and facilitate the company's selection of the talent it needs.
  • FIG. 1 is a schematic flowchart of a personality prediction method for an interviewee provided by an embodiment of the present application;
  • FIG. 2 is a schematic flowchart of another personality prediction method for an interviewee provided by an embodiment of the present application;
  • FIG. 3 is a schematic flowchart of measuring the match between an interviewee and the team of the interviewed position provided by an embodiment of the present application;
  • FIG. 4 is a schematic flowchart of yet another personality prediction method for an interviewee provided by an embodiment of the present application;
  • FIG. 5 is a schematic structural diagram of a personality prediction apparatus for an interviewee provided by an embodiment of the present application;
  • FIG. 6 is a schematic structural diagram of another personality prediction apparatus for an interviewee provided by an embodiment of the present application;
  • FIG. 7 is a schematic structural diagram of yet another personality prediction apparatus for an interviewee provided by an embodiment of the present application.
  • An embodiment of the present application provides a personality prediction method for an interviewee, which can dynamically configure personality prediction information for interviewees. As shown in FIG. 1, the method includes:
  • 101. Acquire a plurality of face sample images carrying personality evaluation result labels. The face sample images may be employee face pictures provided by the recruiting company: since the recruiting company establishes an employee file when hiring an employee, and the employee file stores the employee's face picture, face pictures of different employees can be retrieved from the recruiting company's employee files, and the face sample images carry the evaluation results of the different personality dimensions to be predicted.
  • It should be noted that the embodiments of the present application do not limit how the interviewee's personality dimensions are divided. Typically, Cattell's 16 personality factors may be selected as the division of the interviewee's personality dimensions, or another scheme may be chosen, for example dividing the interviewee's personality dimensions by traits such as reason, emotion, will, introversion, extroversion and independence.
  • 102. Input the face sample images into a convolutional neural network for training to construct a personality prediction model, the personality prediction model recording the mapping relationship between face images and personality evaluation results. Because the face sample images contain evaluation results in different personality dimensions, the face sample images are labelled according to those evaluation results so that images with different personality evaluation results can be distinguished.
  • In the embodiments of the present application, the convolutional neural network model is a network structure that can learn the mapping relationship between face images and personality evaluation results; this network structure amounts to the personality prediction model, and the interviewee's personality is predicted by the personality prediction model.
  • Specifically, the convolutional neural network model may adopt the VGG-16 structure. VGG-16 contains five stages, a fully connected layer and a classification layer; each stage contains one max-pooling layer and several convolutional layers; the number of convolution kernels per layer starts at 64 in the first stage and doubles at each stage up to 512; and the whole VGG-16 network uses the same 3×3 convolution kernel size and 2×2 max-pooling size.
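  • By way of illustration only, the VGG-16-style network described above may be sketched with an off-the-shelf implementation as follows; the use of a recent torchvision, the untrained weights and the 16-output regression head are assumptions made for illustration and are not part of the present application.

```python
import torch
import torch.nn as nn
from torchvision import models

# Sketch: VGG-16 backbone (five stages of 3x3 convolutions with 2x2 max-pooling,
# channel counts 64 -> 128 -> 256 -> 512 -> 512) with the final classifier layer
# replaced so that it outputs one evaluation score per personality dimension.
NUM_FACTORS = 16                                   # e.g. Cattell's 16 personality factors (assumption)

backbone = models.vgg16(weights=None)              # untrained VGG-16 (torchvision >= 0.13 API)
backbone.classifier[-1] = nn.Linear(4096, NUM_FACTORS)

x = torch.randn(1, 3, 224, 224)                    # one face image, standard VGG input size
scores = backbone(x)                               # shape (1, 16): one raw score per factor
print(scores.shape)
```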
  • 103. Input the interviewee's face image into the personality prediction model to predict the personality evaluation result corresponding to the interviewee. The personality prediction model can predict the interviewee's evaluation results in different personality dimensions; the division of personality dimensions and the form of the evaluation results are not limited in the embodiments of the present application. For example, the interviewee's personality evaluation result may be the evaluation result in each personality dimension, with an evaluation score, or alternatively an evaluation grade, set for each dimension; the interviewee's face image is input into the personality prediction model to predict the interviewee's score in each personality dimension.
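  • The prediction step may then be sketched, again purely for illustration: a face image is preprocessed and passed through the model, which returns one score per personality dimension. The file name, factor names and the untrained stand-in model below are assumptions, not part of the present application.

```python
from PIL import Image
import torch
import torch.nn as nn
from torchvision import models, transforms

# Stand-in for the trained personality prediction model (in practice, load trained weights).
model = models.vgg16(weights=None)
model.classifier[-1] = nn.Linear(4096, 16)          # one output per personality dimension
model.eval()

preprocess = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
face = preprocess(Image.open("interviewee_face.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    scores = model(face).squeeze(0)                 # raw score per personality dimension

FACTOR_NAMES = ["warmth", "reasoning", "emotional_stability", "dominance"]  # first 4 of 16, illustrative
for name, value in zip(FACTOR_NAMES, scores.tolist()[:4]):
    print(f"{name}: {value:.1f}")
```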
  • In the present application, face sample images are input into a convolutional neural network to construct a personality prediction model that records the mapping relationship between face images and personality evaluation results, so the model can predict the interviewee's personality evaluation results in different personality dimensions. Compared with the prior-art approach in which the interviewer judges the candidate's personality traits only by talking with the candidate and drawing on previous experience and data, the embodiments of the present application, by constructing a personality prediction model, predict the interviewee's personality evaluation results in different personality dimensions, provide the interviewer with more information about the interviewee's personality traits, help the interviewer measure in all respects how well the interviewee matches the hiring position and the hiring team, and facilitate the company's selection of the talent it needs.
  • An embodiment of the present application provides another personality prediction method for an interviewee. As shown in FIG. 2, the method includes:
  • 201. Acquire face images of each personality dimension from a face image library, the face image library containing face images with different personality evaluation results. The face image library is the employee file library established by the recruiting company when hiring employees; it contains employees' face images together with the evaluation results of the personality test taken on joining, and each employee face image corresponds to a unique personality evaluation result. For example, 100 employee face images and the corresponding personality evaluation results are taken from the employee files as training samples. The present application does not limit the number of face images obtained; it should be noted, however, that to ensure the accuracy of the subsequently built model, the number of acquired face images should not be too small.
  • 202. Label the face images according to the personality evaluation results in the face image library to obtain face images carrying labels for different personality evaluation results. The personality evaluation result may be the result of the personality test taken by the employee on joining; it records the employee's evaluation results in different personality dimensions and may take the form of scores or grades in those dimensions. Cattell's 16 personality factors may be used to divide the employee's personality dimensions, or another scheme may be used, which the present application does not limit. The extracted employee face images are labelled with the employees' evaluation results in the different personality dimensions so that each face image carries the corresponding label; the labelling may be done manually or with the software LambleTool, yielding face images carrying labels for different personality evaluation results.
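  • A minimal sketch of the labelling step, assuming the evaluation results have been exported to a table; the file name and column layout are illustrative assumptions only.

```python
import pandas as pd

# Sketch: pairing each employee face picture with its personality evaluation label.
# In practice these records would come from the recruiting company's employee files.
records = pd.read_csv("employee_personality_tests.csv")   # columns: image_path, warmth, reasoning, ...
FACTOR_COLUMNS = [c for c in records.columns if c != "image_path"]

labelled_samples = [
    (row["image_path"], row[FACTOR_COLUMNS].astype(float).tolist())
    for _, row in records.iterrows()
]
print(f"{len(labelled_samples)} labelled face sample images")
```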
  • 203. Perform background filtering on the face images of each personality dimension using an open-source network model to obtain a plurality of face sample images of different personality dimensions. The open-source network model may be the MTCNN model, which performs face detection on the face images carrying the different personality evaluation results. In the embodiments of the present application, the acquired face images are background-filtered with the open-source MTCNN model to detect the faces, yielding multiple face sample images of different personality dimensions. Specifically, the MTCNN model consists of a P-Net, an R-Net and an O-Net. The P-Net obtains candidate windows for the face region in the employee face image together with bounding-box regression vectors, and calibrates the candidate windows of the face region with the bounding boxes; the R-Net uses the bounding-box regression vectors obtained by the P-Net and non-maximum suppression to remove misjudged face regions and candidate windows that highly overlap the face region; the O-Net plays a similar role to the R-Net and outputs facial landmarks on the face region to detect the face.
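  • For illustration, the background-filtering step may be sketched with an open-source MTCNN implementation; the facenet-pytorch package, the chosen crop size and the file name are assumptions, not part of the present application.

```python
from PIL import Image
from facenet_pytorch import MTCNN

# Sketch: face detection with an MTCNN cascade (P-Net / R-Net / O-Net), used here to
# strip the background and keep only the face region of each sample picture.
mtcnn = MTCNN(image_size=224, margin=20)       # crops and resizes the detected face

image = Image.open("employee_photo.jpg").convert("RGB")
boxes, probs = mtcnn.detect(image)             # candidate face boxes and confidences
face_tensor = mtcnn(image)                     # cropped face tensor with background removed (or None)
if face_tensor is not None:
    print("face detected, box:", boxes[0], "confidence:", probs[0])
```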
  • 204. Input the face sample images into a convolutional neural network for training to construct a personality prediction model. The convolutional neural network model may comprise a multi-layer structure. Specifically, the convolutional layers of the network (which may be divided into several stages, each containing several convolutional layers) extract the feature parameters of the face sample image in each personality dimension; the fully connected layer of the network aggregates those feature parameters to obtain multi-dimensional face-image feature parameters in each personality dimension; the pooling layers of the network reduce the dimensionality of those feature parameters to obtain the weight vector of the face image in each personality dimension; and the classification layer generates the personality evaluation result of the face image in each personality dimension from the weight vectors, thereby constructing a personality prediction model that outputs, for each input face image, that image's evaluation results in each personality dimension.
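  • Purely as a sketch of how such training may be organised, the model can be fitted by regression toward the labelled per-dimension scores; the tiny stand-in network, random placeholder tensors and hyper-parameters below are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Tiny stand-in for the convolutional network described above; real training would
# iterate over the labelled face sample images rather than random placeholder tensors.
model = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 32 * 32, 16),            # one evaluation score per personality dimension
)
images = torch.randn(8, 3, 64, 64)          # placeholder face sample images
labels = torch.rand(8, 16) * 100            # placeholder 0-100 scores per dimension
loader = DataLoader(TensorDataset(images, labels), batch_size=4)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()                      # regression toward the labelled scores
for epoch in range(2):
    for batch_images, batch_scores in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(batch_images), batch_scores)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.2f}")
```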
  • 205. Input the interviewee's face image into the personality prediction model to predict the personality evaluation result corresponding to the interviewee. The interviewee is a user whose personality is unknown; the personality evaluation result may be scores or grades in each personality dimension, the number and form of which the present application does not limit. The interviewee's face image is input into the constructed personality prediction model to predict the interviewee's evaluation results in each personality dimension. For example, using Cattell's 16 personality factors, the interviewee's personality is divided into 16 dimensions and the interviewee's score on each factor ranges from 0 to 100. The classification layer of the personality prediction model divides the interviewee's face image into 16 parts, each part representing the probability that the face image belongs to that personality factor; each of the 16 outputs is connected to a fully connected layer with 10 nodes, whose outputs represent the probabilities of the interviewee's face image falling in each score interval of the current factor. The output probabilities are then multiplied by the scores of the corresponding intervals and summed, giving the interviewee's score on each of the 16 personality factors. Specifically, suppose the probabilities over the score intervals are 0.1, 0.2, 0.5, 0.2, 0, 0, 0, 0, 0 and 0 (the probabilities over the intervals must sum to 1), and the interval scores are, in order, 90, 80, 70, 60, 50, 40, 30, 20, 10 and 0; then the interviewee's score on this personality factor is 90×0.1 + 80×0.2 + 70×0.5 + 60×0.2 = 9 + 16 + 35 + 12 = 72 points. The interviewee's scores on the other personality factors are obtained in the same way.
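  • The score computation above reduces to a probability-weighted sum over the score intervals, as the following sketch shows.

```python
# Sketch of the scoring rule described above: the 10-node layer outputs a probability
# for each score interval, and the factor score is the probability-weighted sum of
# the interval scores.
interval_scores = [90, 80, 70, 60, 50, 40, 30, 20, 10, 0]
interval_probs  = [0.1, 0.2, 0.5, 0.2, 0, 0, 0, 0, 0, 0]   # must sum to 1

assert abs(sum(interval_probs) - 1.0) < 1e-9
factor_score = sum(p * s for p, s in zip(interval_probs, interval_scores))
print(round(factor_score))   # 72, matching 90*0.1 + 80*0.2 + 70*0.5 + 60*0.2
```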
  • 206. Set the personality dimensions to be examined for the interviewed position according to the job skills of that position. For example, interview positions may be divided in advance into several categories, such as technical without much communication (development, research, security and similar positions), technical requiring communication (architecture, project and similar positions), non-technical without much communication (logistics, operations, finance and similar positions), and non-technical requiring communication (product, pre-sales, retail and similar positions); for each job category, a preset number of personality dimensions is selected as the basis of examination for that category. Specifically, a preset number of personality factors may be selected from Cattell's 16 personality factors as the examination basis for the interviewed position, with a matching score or examination grade set for each selected factor; the specific form of examination is not limited by the present application.
  • 207. Extract, from the interviewee's personality evaluation result, the interviewee's evaluation results in the personality dimensions examined for the interviewed position. The interviewee's personality evaluation result contains the evaluation results in the personality dimensions examined for the interviewed position; for example, the examined dimensions may be the warmth, reasoning, emotional stability, dominance, apprehension, self-reliance and perfectionism factors of Cattell's 16 personality factors, and the corresponding scores are extracted from the interviewee's personality evaluation result.
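  • One possible way to store the examined dimensions and their preset matching scores per job category is sketched below; the factor lists for categories other than the worked example, and the 70-point thresholds, are illustrative assumptions only.

```python
# Sketch: examined personality factors and preset matching scores per job category.
EXAMINED_FACTORS = {
    "technical_low_communication":     {"self_reliance": 70, "perfectionism": 70,
                                        "social_boldness": 70, "emotional_stability": 70},
    "technical_communication":         {"warmth": 70, "reasoning": 70, "emotional_stability": 70},
    "non_technical_low_communication": {"rule_consciousness": 70, "perfectionism": 70},
    "non_technical_communication":     {"warmth": 70, "liveliness": 70, "social_boldness": 70},
}

def examined_dimensions(job_category: str) -> dict:
    """Return the examined personality factors and their preset scores for a job category."""
    return EXAMINED_FACTORS[job_category]

print(examined_dimensions("technical_low_communication"))
```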
  • 208. Determine the match between the interviewee and the interviewed position according to the interviewee's evaluation results in the personality dimensions examined for that position. An examination standard may be set in advance for the interviewed position: if the interviewee's evaluation results in the examined personality dimensions reach the standard, the interviewee matches the position; otherwise the interviewee does not. To help the interviewer measure the match between the interviewee and the team of the interviewed position, and to obtain the team's staffing needs in each personality dimension, the following steps 209 and 210 may be executed after the interviewee's face image has been input into the personality prediction model and the corresponding personality evaluation result predicted, so that interviewees who better fit the team can be recruited. It should be noted that the match between the interviewee and the team is measured after step 205 is performed; the execution of steps 209 and 210 is not limited to taking place after steps 206 to 208. As shown in FIG. 3, measuring the match between the interviewee and the team of the interviewed position specifically includes the following steps:
  • 209. According to the expected composition of the team of the interviewed position, acquire the personality evaluation results of sample persons in each post of the team. Face images of the sample persons in each post of the team are acquired and input into the personality prediction model to predict those sample persons' evaluation results in each personality dimension. For example, using Cattell's 16 personality factors, the personality of the sample persons in each post of the team is divided into 16 dimensions, each sample person's score on each factor ranges from 0 to 100, and the personality prediction model yields each sample person's scores on the 16 personality factors.
  • 210. Compare the average distribution of the sample persons' evaluation results in each personality dimension with a preset evaluation standard for the team, to obtain the team's staffing needs in each personality dimension. If the average of the sample persons' evaluation results in a given personality dimension reaches the preset evaluation standard for the team, the team is determined to have a surplus in that dimension; a surplus means the team already has enough staff with that personality dimension and no more need to be allocated. If the average does not reach the preset standard, the team is determined to be scarce in that dimension; scarcity means the team lacks staff with that personality dimension, and staff may be recruited in that dimension. By determining the team's situation in each personality dimension, the interviewer can conveniently measure how well the interviewee matches the team of the interviewed position, and thus recruit interviewees who better fit that team.
  • For example, the team of the interviewed position has five sample persons. Their face images are input into the personality prediction model to obtain each sample person's scores on the 16 personality factors; the five scores on each factor are added and averaged, giving the team's average score per factor, for example: warmth 70, reasoning 79, emotional stability 66, dominance 75, liveliness 88, rule-consciousness 82, social boldness 55, sensitivity 77, vigilance 75, abstractedness 72, privateness 74, apprehension 70, openness to change 80, self-reliance 50, perfectionism 52 and tension 70 points. The team's preset evaluation standard on each personality factor is 70 points. By comparison, the team's sample persons fail to reach the standard on the emotional stability, self-reliance, perfectionism and social boldness factors, so the team is scarce in those factors, which means it needs interviewees with those personality dimensions; conversely, the sample persons reach the standard on the other factors, so the team has a surplus in those factors, which means it does not need interviewees with those other personality dimensions.
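  • The team-matching rule above can be sketched as follows; the individual sample scores are illustrative and merely chosen so that the averages match the worked example.

```python
# Sketch: average the sample persons' per-factor scores and compare them with the
# team's preset 70-point standard; factors below the standard are "scarce"
# (staff needed), the rest are "surplus".
team_scores = {                                    # per-factor scores of the 5 sample persons
    "emotional_stability": [60, 65, 70, 68, 67],   # averages to 66
    "self_reliance":       [45, 50, 55, 48, 52],   # averages to 50
    "liveliness":          [85, 90, 88, 87, 90],   # averages to 88
}
STANDARD = 70

averages = {factor: sum(v) / len(v) for factor, v in team_scores.items()}
scarce  = [f for f, avg in averages.items() if avg < STANDARD]
surplus = [f for f, avg in averages.items() if avg >= STANDARD]
print(averages)
print("scarce:", scarce, "surplus:", surplus)
```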
  • In the present application, face sample images are input into a convolutional neural network to construct a personality prediction model that records the mapping relationship between face images and personality evaluation results, so the model can predict the interviewee's personality evaluation results in different personality dimensions. Compared with the prior-art approach in which the interviewer judges the candidate's personality traits only by talking with the candidate and drawing on previous experience and data, the embodiments of the present application predict the interviewee's personality evaluation results in different personality dimensions, provide the interviewer with more information about the interviewee's personality traits, and compare the interviewee's evaluation results with the evaluation standards set for the interviewed position and its team, so that the interviewer can measure in all respects how well the interviewee matches the hiring position and hiring team, which facilitates the company's selection of the talent it needs.
  • An embodiment of the present application provides yet another personality prediction method for an interviewee. As shown in FIG. 4, the method includes:
  • 301. Acquire a plurality of face sample images carrying personality evaluation result labels. For the embodiments of the present application, the specific implementation of acquiring the face sample images is the same as in step 101 and is not repeated here.
  • It should be noted that the evaluation results for the different personality dimensions may be the interviewee's scores in those dimensions or the interviewee's evaluation grades in those dimensions, which the embodiments of the present application do not limit.
  • 302. Input the face sample images into a convolutional neural network for training to construct a personality prediction model, the personality prediction model recording the mapping relationship between face images and personality evaluation results.
  • In the embodiments of the present application, the convolutional neural network model may, among other options, adopt the VGG-16 model. The VGG-16 model contains five stages, a fully connected layer and a classification layer; each stage contains one max-pooling layer and several convolutional layers; the number of convolution kernels per layer starts at 64 in the first stage and doubles at each stage up to 512; and the whole VGG-16 network uses the same 3×3 convolution kernel size and 2×2 max-pooling size. The training process with the VGG-16 model may include, but is not limited to, the following: the convolutional layers of each stage of VGG-16 extract the feature parameters of the face sample image in each personality dimension; after the face sample image is input into the VGG-16 model, the number of output channels becomes 64 after the first-stage convolutional layers, 128 after the second stage, and 256, 512 and 512 after the third, fourth and fifth stages respectively; the fully connected layer of the VGG-16 model, located after the fifth stage, aggregates the feature parameters of the face sample image in each personality dimension to obtain multi-dimensional face-image feature parameters in each personality dimension; the five pooling layers of the VGG-16 model, one at the end of each stage, reduce the dimensionality of those feature parameters to obtain the weight vector of the face image in each personality dimension; and the final classification layer of the VGG-16 model classifies the weight vectors in each personality dimension and generates the personality evaluation result of the face image in each personality dimension, thereby constructing the personality prediction model.
  • The personality prediction model outputs, for each input face image of an interviewee, the interviewee's personality evaluation results in each personality dimension. For example, using Cattell's 16 personality factors, the interviewee's personality is divided into 16 dimensions, i.e. the output of the personality prediction model is divided into 16 classes; the interviewee's face image is input into the constructed personality prediction model to predict the interviewee's evaluation results in the 16 personality dimensions.
  • 306a. If the interviewee's evaluation results in the personality dimensions examined for the interviewed position reach the preset evaluation standard for the position, determine that the interviewee matches the position. For example, the self-reliance, perfectionism, social boldness and emotional stability factors of Cattell's 16 personality factors are the examined personality dimensions of the interviewed position, and an examination score of 70 points is set for each of these four factors; if the interviewee's scores on the self-reliance, perfectionism, social boldness and emotional stability factors are all above the corresponding preset examination score of 70 points, the interviewee is determined to match the interviewed position.
  • 306b. If the interviewee's evaluation results in the personality dimensions examined for the interviewed position do not reach the preset evaluation standard for the position, determine that the interviewee does not match the position. For example, with the same four examined factors and a preset examination score of 70 points each, if the interviewee's score on any one of the four factors is below 70 points, the interviewee is determined not to match the interviewed position and needs to be examined further.
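  • Steps 306a and 306b amount to a per-factor threshold check, sketched below with illustrative scores.

```python
# Sketch: the interviewee matches the position only if every examined factor score
# reaches the preset 70-point standard; the scores shown are illustrative.
examined = {"self_reliance": 70, "perfectionism": 70, "social_boldness": 70, "emotional_stability": 70}
interviewee_scores = {"self_reliance": 72, "perfectionism": 65, "social_boldness": 80, "emotional_stability": 74}

matches = all(interviewee_scores[f] >= threshold for f, threshold in examined.items())
print("match" if matches else "no match, examine further")   # perfectionism is below 70 -> "no match"
```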
  • In the present application, face sample images are input into a convolutional neural network to construct a personality prediction model that records the mapping relationship between face images and personality evaluation results, so the model can predict the interviewee's personality evaluation results in different personality dimensions. Compared with the prior-art approach in which the interviewer judges the candidate's personality traits only by talking with the candidate and drawing on previous experience and data, the embodiments of the present application predict the interviewee's personality evaluation results in different personality dimensions, provide the interviewer with more information about the interviewee's personality traits, and compare the interviewee's evaluation results with the evaluation standards set for the interviewed position and its team, so that the interviewer can measure in all respects how well the interviewee matches the hiring position and hiring team, which facilitates the company's selection of the talent it needs.
  • Further, as a specific implementation of the method shown in FIG. 1, an embodiment of the present application provides a personality prediction apparatus for an interviewee. As shown in FIG. 5, the apparatus includes an acquisition unit 41, a construction unit 42 and a prediction unit 43. The acquisition unit 41 may be configured to acquire a plurality of face sample images carrying personality evaluation result labels; the construction unit 42 may be configured to input the face sample images acquired by the acquisition unit into a convolutional neural network for training to construct a personality prediction model, the personality prediction model recording the mapping relationship between face images and personality evaluation results; and the prediction unit 43 may be configured to input an interviewee's face image into the personality prediction model constructed by the construction unit to predict the personality evaluation result corresponding to the interviewee.
  • The personality prediction apparatus for an interviewee provided by the embodiments of the present application first inputs face sample images into a convolutional neural network for training to construct a personality prediction model that records the mapping relationship between face images and personality evaluation results; the model can then predict the interviewee's evaluation results in each personality dimension. Compared with the prior-art approach in which the interviewer judges the candidate's personality traits only by talking with the candidate and drawing on previous experience and data, this provides the interviewer with more information about the interviewee's personality traits, helps the interviewer measure in all respects how well the interviewee matches the interviewed position and its team, and facilitates the company's selection of the talent it needs.
  • As a further description of the apparatus shown in FIG. 5, FIG. 6 is a schematic structural diagram of another personality prediction apparatus for an interviewee according to an embodiment of the present application. As shown in FIG. 6, the apparatus further includes:
  • a setting unit 44, which may be configured to set the personality dimensions examined for the interviewed position according to the job skills of the position interviewed for by the interviewee; an extraction unit 45, which may be configured to extract, from the personality evaluation result corresponding to the interviewee, the interviewee's evaluation results in the personality dimensions examined for the interviewed position; and a determination unit 46, which may be configured to determine the match between the interviewee and the interviewed position according to the evaluation results, extracted by the extraction unit, of the interviewee in the personality dimensions examined for the interviewed position.
  • Further, the apparatus also includes a comparison unit 47. The acquisition unit 41 may further be configured to acquire, according to the expected post composition of the team of the position interviewed for by the interviewee, the personality evaluation results corresponding to sample persons in each post of the team; the comparison unit 47 may be configured to compare the average distribution of the sample persons' evaluation results in each personality dimension with the preset evaluation standard for the team, to obtain the team's staffing needs in each personality dimension.
  • Further, the comparison unit 47 may specifically be configured to determine that the team of the interviewed position has a surplus in a given personality dimension if the average distribution of the sample persons' evaluation results in that dimension reaches the preset evaluation standard for the team, and to determine that the team is scarce in a given personality dimension if the average distribution does not reach that standard.
  • Further, the acquisition unit 41 includes: an acquisition module 411, which may be configured to acquire face images of each personality dimension from the face image library, the face image library containing face images with different personality evaluation results; and a filtering module 413, which may be configured to perform background filtering, using an open-source network model, on the face images of each personality dimension acquired by the acquisition module, to obtain multiple face sample images of different personality dimensions.
  • Further, the acquisition unit 41 also includes a labelling module 412, which may be configured to label face images according to the personality evaluation results in the face image library to obtain face images carrying labels for different personality evaluation results.
  • Further, the construction unit 42 includes: an extraction module 421, which may be configured to extract, through the convolutional layers of the convolutional neural network model, the feature parameters of the face sample image in each personality dimension; an aggregation module 422, which may be configured to aggregate, through the fully connected layer of the convolutional neural network model, those feature parameters to obtain multi-dimensional face-image feature parameters in each personality dimension; a dimensionality-reduction module 423, which may be configured to reduce, through the pooling layer of the convolutional neural network model, the dimensionality of the multi-dimensional face-image feature parameters in each personality dimension to obtain the weight vector of the face image in each personality dimension; and a generation module 424, which may be configured to generate, through the classification layer of the convolutional neural network model and according to the weight vector of the face image in each personality dimension, the personality evaluation result of the face image in each personality dimension, and to construct the personality prediction model.
  • As a further description of the apparatus shown in FIG. 5, FIG. 7 is a schematic structural diagram of yet another personality prediction apparatus for an interviewee according to an embodiment of the present application. As shown in FIG. 7, the determination unit 46 includes a judging module 461, which may be configured to determine that the interviewee matches the interviewed position if the interviewee's evaluation results in the personality dimensions examined for the position reach the preset evaluation standard for the position, and to determine that the interviewee does not match the interviewed position if those evaluation results do not reach the preset evaluation standard.
  • Further, the determination unit 46 also includes a selection module 462, which may be configured to randomly select, from an underlying database, interview questions in the personality dimensions examined for the interviewed position and to examine the interviewee according to those questions; interview questions in each personality dimension are pre-stored in the underlying database. An illustrative sketch of this selection step follows.
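```python
import random

# Sketch: randomly drawing interview questions, per personality dimension, from the
# pre-stored underlying database; the dimension names and question texts are illustrative.
QUESTION_BANK = {
    "self_reliance": ["Describe a decision you made without guidance.",
                      "When do you prefer to work alone?"],
    "perfectionism": ["How do you keep routine work accurate over long periods?"],
}

def pick_questions(dimensions_to_examine, per_dimension=1):
    """Pick random questions for each personality dimension that needs further examination."""
    return {d: random.sample(QUESTION_BANK[d], per_dimension) for d in dimensions_to_examine}

print(pick_questions(["self_reliance", "perfectionism"]))
```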
  • Based on the methods shown in FIG. 1 to FIG. 4 above, this embodiment correspondingly also provides a non-volatile readable storage medium on which computer-readable instructions are stored; when the readable instructions are executed by a processor, the personality prediction method for an interviewee shown in FIG. 1 to FIG. 4 above is implemented.
  • Based on such understanding, the technical solution of the present application may be embodied in the form of a software product, which may be stored in a non-volatile readable storage medium (such as a CD-ROM, USB flash drive or removable hard disk) and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods described in the various implementation scenarios of the present application.
  • Embodiments of the present application also provide a computer device, which may specifically be a personal computer, a server, a network device, etc. The physical device includes a non-volatile readable storage medium and a processor; the non-volatile readable storage medium is used to store computer-readable instructions, and the processor is used to execute the computer-readable instructions to implement the personality prediction method for an interviewee shown in FIG. 1 to FIG. 4 above.
  • Optionally, the computer device may further include a user interface, a network interface, a camera, a radio frequency (RF) circuit, a sensor, an audio circuit, a Wi-Fi module, and so on.
  • The user interface may include a display, an input unit such as a keyboard, and the like; optionally, the user interface may also include a USB interface, a card-reader interface, and the like.
  • The network interface may optionally include a standard wired interface and a wireless interface (such as a Bluetooth interface or a Wi-Fi interface).
  • Those skilled in the art will understand that the physical device structure provided for interviewee personality prediction does not constitute a limitation on the physical device, which may include more or fewer components, combine certain components, or use a different arrangement of components.
  • the non-volatile readable storage medium may also include an operating system and a network communication module.
  • The operating system is a program that manages the hardware and software resources of the computer device described above, and supports the running of the information-processing program and other software and/or programs.
  • the network communication module is used to implement communication between the components inside the non-volatile readable storage medium, and to communicate with other hardware and software in the physical device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Pathology (AREA)
  • Surgery (AREA)
  • Computer Security & Cryptography (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Stored Programmes (AREA)
  • Image Analysis (AREA)

Abstract

A personality prediction method and apparatus for an interviewee, and a computer non-volatile readable storage medium, which can provide an interviewer with more information about an interviewee's personality traits. The method includes: acquiring a plurality of face sample images carrying personality evaluation result labels; inputting the face sample images into a convolutional neural network for training to construct a personality prediction model, the personality prediction model recording a mapping relationship between face images and personality evaluation results; and inputting a face image of an interviewee into the personality prediction model to predict the personality evaluation result corresponding to the interviewee.

Description

Personality prediction method and apparatus for an interviewee, and computer-readable storage medium
This application claims priority to the Chinese patent application No. 2018113313479, filed with the Chinese Patent Office on November 9, 2018 and entitled "Personality prediction method and apparatus for an interviewee, computer device and computer storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of artificial intelligence technology, and in particular to a personality prediction method and apparatus for an interviewee, and a computer non-volatile readable storage medium.
Background
Talent recruitment is an indispensable part of a company's development. Large enterprises with hiring needs in particular require interviewers who can get to know candidates quickly and accurately, so that talent meeting the company's recruitment requirements can be selected effectively.
In the current interview process, the interviewer usually reviews the candidate's resume first and prepares interview questions for the position; the interviewer then asks the candidate to introduce himself or herself, poses questions for the candidate to answer, may ask follow-up questions based on the candidate's answers, and the candidate may ask the interviewer to introduce the company's basic situation; finally, the interviewer gives an evaluation report, and qualified candidates move on to the next stage. Under the existing interview process, the interviewer can use the resume information and the candidate's answers to professional questions, combined with past experience and data, to roughly assess the candidate's ability, judge the candidate's personality tendencies from answers to other questions, and even infer personality traits from the candidate's dress, manner, expression and speech. It can be seen that automation is not widely used in the existing recruitment and interview process, which still relies mainly on face-to-face interviews conducted by technical managers or human-resources staff.
In practice, however, on the one hand the interviewer's energy and ability are limited, so it is difficult, under the existing interview process, to examine the match between the candidate and the position from multiple aspects; on the other hand, the team leaders and team members taking part in interviews have often not received professional assessment training, so under the existing process the interviewer obtains only information about the candidate's ability and cannot obtain more information about the candidate's personality traits, resulting in missing information.
Summary
In view of this, the present application provides a personality prediction method and apparatus for an interviewee, and a computer non-volatile readable storage medium, the main purpose of which is to solve the problem in the prior art that it is difficult to obtain information about an interviewee's personality traits during the interview process.
According to one aspect of the present application, a personality prediction method for an interviewee is provided, the method including: acquiring a plurality of face sample images carrying personality evaluation result labels; inputting the face sample images into a convolutional neural network for training to construct a personality prediction model, the personality prediction model recording a mapping relationship between face images and personality evaluation results; and inputting a face image of an interviewee into the personality prediction model to predict the personality evaluation result corresponding to the interviewee.
According to another aspect of the present application, a personality prediction apparatus for an interviewee is provided, the apparatus including: an acquisition unit for acquiring a plurality of face sample images carrying personality evaluation result labels; a construction unit for inputting the face sample images acquired by the acquisition unit into a convolutional neural network for training to construct a personality prediction model, the personality prediction model recording a mapping relationship between face images and personality evaluation results; and a prediction unit for inputting an interviewee's face image into the personality prediction model constructed by the construction unit to predict the personality evaluation result corresponding to the interviewee.
According to a third aspect of the embodiments of the present application, a computer non-volatile readable storage medium is provided, on which computer-readable instructions are stored; when executed by a processor, the computer-readable instructions implement the following steps: acquiring a plurality of face sample images carrying personality evaluation result labels; inputting the face sample images into a convolutional neural network for training to construct a personality prediction model, the personality prediction model recording a mapping relationship between face images and personality evaluation results; and inputting an interviewee's face image into the personality prediction model to predict the personality evaluation result corresponding to the interviewee.
According to a fourth aspect of the embodiments of the present application, a computer device is provided, including a memory, a processor and computer-readable instructions stored in the memory and executable on the processor; when executing the computer-readable instructions, the processor implements the following steps: acquiring a plurality of face sample images carrying personality evaluation result labels; inputting the face sample images into a convolutional neural network for training to construct a personality prediction model, the personality prediction model recording a mapping relationship between face images and personality evaluation results; and inputting an interviewee's face image into the personality prediction model to predict the personality evaluation result corresponding to the interviewee.
By means of the above technical solutions, in the personality prediction method and apparatus for an interviewee provided by the present application, face sample images are input into a convolutional neural network for training to construct a personality prediction model that records the mapping relationship between face images and personality evaluation results, so the interviewee's evaluation results in each personality dimension can be predicted by the personality prediction model. Compared with the prior-art approach in which the interviewer judges a candidate's personality traits only by talking with the candidate and drawing on past experience and data, the embodiments of the present application predict the interviewee's evaluation results in each personality dimension by constructing a personality prediction model, provide the interviewer with more information about the interviewee's personality traits, help the interviewer measure in all respects how well the interviewee matches the interviewed position and its team, and facilitate the company's selection of the talent it needs.
The above description is only an overview of the technical solutions of the present application. In order that the technical means of the present application may be understood more clearly and implemented in accordance with the contents of the specification, and that the above and other objects, features and advantages of the present application may be more readily apparent, specific embodiments of the present application are set forth below.
Brief Description of the Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for the purpose of illustrating the preferred embodiments and are not to be construed as limiting the present application. Like reference signs denote like parts throughout the drawings. In the drawings:
FIG. 1 is a schematic flowchart of a personality prediction method for an interviewee provided by an embodiment of the present application;
FIG. 2 is a schematic flowchart of another personality prediction method for an interviewee provided by an embodiment of the present application;
FIG. 3 is a schematic flowchart of measuring the match between an interviewee and the team of the interviewed position provided by an embodiment of the present application;
FIG. 4 is a schematic flowchart of yet another personality prediction method for an interviewee provided by an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a personality prediction apparatus for an interviewee provided by an embodiment of the present application;
FIG. 6 is a schematic structural diagram of another personality prediction apparatus for an interviewee provided by an embodiment of the present application;
FIG. 7 is a schematic structural diagram of yet another personality prediction apparatus for an interviewee provided by an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be implemented in various forms and should not be limited by the embodiments set forth herein. Rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and so that its scope can be fully conveyed to those skilled in the art.
An embodiment of the present application provides a personality prediction method for an interviewee, which can dynamically configure personality prediction information for interviewees. As shown in FIG. 1, the method includes:
101. Acquire a plurality of face sample images carrying personality evaluation result labels. The face sample images may be employee face pictures provided by the recruiting company: since the recruiting company establishes an employee file when hiring an employee, and the employee file stores the employee's face picture, face pictures of different employees can be retrieved from the recruiting company's employee files, and the face sample images carry the evaluation results of the different personality dimensions to be predicted.
It should be noted that the embodiments of the present application do not limit how the interviewee's personality dimensions are divided. Typically, Cattell's 16 personality factors may be selected as the division of the interviewee's personality dimensions, or another scheme may be chosen, for example dividing the interviewee's personality dimensions by traits such as reason, emotion, will, introversion, extroversion and independence.
102. Input the face sample images into a convolutional neural network for training to construct a personality prediction model, the personality prediction model recording the mapping relationship between face images and personality evaluation results. Because the face sample images contain evaluation results in different personality dimensions, the face sample images are labelled according to those evaluation results so that images with different personality evaluation results can be distinguished.
In the embodiments of the present application, the convolutional neural network model is a network structure that can learn the mapping relationship between face images and personality evaluation results; this network structure amounts to the personality prediction model, and the interviewee's personality is predicted by the personality prediction model. Specifically, the convolutional neural network model may adopt the VGG-16 structure: VGG-16 contains five stages, a fully connected layer and a classification layer; each stage contains one max-pooling layer and several convolutional layers; the number of convolution kernels per layer starts at 64 in the first stage and doubles at each stage up to 512; and the whole VGG-16 network uses the same 3×3 convolution kernel size and 2×2 max-pooling size.
103. Input the interviewee's face image into the personality prediction model to predict the personality evaluation result corresponding to the interviewee. With the interviewee's consent, the interviewee's face image may be obtained by asking the interviewee to upload a photograph in advance or by photographing the interviewee on site. The personality prediction model can predict the interviewee's evaluation results in different personality dimensions; the division of personality dimensions and the form of the evaluation results are not limited in the embodiments of the present application. For example, the interviewee's personality evaluation result may be the evaluation result in each personality dimension, with an evaluation score, or alternatively an evaluation grade, set for each dimension; the interviewee's face image is input into the personality prediction model to predict the interviewee's score in each personality dimension.
In the present application, face sample images are input into a convolutional neural network to construct a personality prediction model that records the mapping relationship between face images and personality evaluation results, so the model can predict the interviewee's personality evaluation results in different personality dimensions. Compared with the prior-art approach in which the interviewer judges the candidate's personality traits only by talking with the candidate and drawing on previous experience and data, the embodiments of the present application, by constructing a personality prediction model, predict the interviewee's personality evaluation results in different personality dimensions, provide the interviewer with more information about the interviewee's personality traits, help the interviewer measure in all respects how well the interviewee matches the hiring position and the hiring team, and facilitate the company's selection of the talent it needs.
An embodiment of the present application provides another personality prediction method for an interviewee. As shown in FIG. 2, the method includes:
201. Acquire face images of each personality dimension from a face image library, the face image library containing face images with different personality evaluation results. The face image library is the employee file library established by the recruiting company when hiring employees; it contains employees' face images together with the evaluation results of the personality test taken on joining, and each employee face image corresponds to a unique personality evaluation result. For example, 100 employee face images and the corresponding personality evaluation results are taken from the employee files as training samples. The present application does not limit the number of face images obtained; it should be noted, however, that to ensure the accuracy of the subsequently built model, the number of acquired face images should not be too small.
202. Label the face images according to the personality evaluation results in the face image library to obtain face images carrying labels for different personality evaluation results. The personality evaluation result may be the result of the personality test taken by the employee on joining; it records the employee's evaluation results in different personality dimensions and may take the form of scores or grades in those dimensions. Cattell's 16 personality factors may be used to divide the employee's personality dimensions, or another scheme may be used, which the present application does not limit. The extracted employee face images are labelled with the employees' evaluation results in the different personality dimensions so that each face image carries the corresponding label; the labelling may be done manually or with the software LambleTool, yielding face images carrying labels for different personality evaluation results.
203. Perform background filtering on the face images of each personality dimension using an open-source network model to obtain a plurality of face sample images of different personality dimensions. The open-source network model may be the MTCNN model, which performs face detection on the face images carrying the different personality evaluation results. In this embodiment, the acquired face images are background-filtered with the open-source MTCNN model to detect the faces, yielding multiple face sample images of different personality dimensions. Specifically, the MTCNN model consists of a P-Net, an R-Net and an O-Net. The P-Net obtains candidate windows for the face region in the employee face image together with bounding-box regression vectors, and calibrates the candidate windows of the face region with the bounding boxes; the R-Net uses the bounding-box regression vectors obtained by the P-Net and non-maximum suppression to remove misjudged face regions and candidate windows that highly overlap the face region; the O-Net plays a similar role to the R-Net and outputs facial landmarks on the face region to detect the face.
204. Input the face sample images into a convolutional neural network for training to construct a personality prediction model, the personality prediction model recording the mapping relationship between face images and personality evaluation results. The personality prediction model outputs, for each input face image, that image's evaluation results in each personality dimension. In this embodiment, the convolutional neural network model may comprise a multi-layer structure: the convolutional layers of the network (which may be divided into several stages, each containing several convolutional layers) extract the feature parameters of the face sample image in each personality dimension; the fully connected layer of the network aggregates those feature parameters to obtain multi-dimensional face-image feature parameters in each personality dimension; the pooling layers of the network reduce the dimensionality of those feature parameters to obtain the weight vector of the face image in each personality dimension; and the classification layer generates the personality evaluation result of the face image in each personality dimension from the weight vectors, thereby constructing a personality prediction model that outputs, for each input face image, that image's personality evaluation results in each personality dimension.
205. Input the interviewee's face image into the personality prediction model to predict the personality evaluation result corresponding to the interviewee. The interviewee is a user whose personality is unknown; the personality evaluation result may be scores or grades in each personality dimension, which the present application does not limit. In this embodiment, with the interviewee's consent, a photograph of the interviewee is first obtained by photographing on site or asking the interviewee to upload a photograph in advance; face detection is then performed on the photograph to obtain the interviewee's face image; finally, the face image is input into the constructed personality prediction model to predict the interviewee's evaluation results in each personality dimension. For example, using Cattell's 16 personality factors, the interviewee's personality is divided into 16 dimensions and the interviewee's score on each factor ranges from 0 to 100. The classification layer of the personality prediction model divides the interviewee's face image into 16 parts, each part representing the probability that the face image belongs to that personality factor; each of the 16 outputs is connected to a fully connected layer with 10 nodes, whose outputs represent the probabilities of the interviewee's face image falling in each score interval of the current factor. The output probabilities are multiplied by the scores of the corresponding intervals and summed, giving the interviewee's score on each of the 16 factors. Specifically, suppose the fully connected layer outputs probabilities over the score intervals of 0.1, 0.2, 0.5, 0.2, 0, 0, 0, 0, 0 and 0 (the probabilities over the intervals must sum to 1) and the interval scores are, in order, 90, 80, 70, 60, 50, 40, 30, 20, 10 and 0; then the interviewee's score on this personality factor is 90×0.1 + 80×0.2 + 70×0.5 + 60×0.2 + 50×0 + 40×0 + 30×0 + 20×0 + 10×0 + 0×0 = 9 + 16 + 35 + 12 = 72 points. The interviewee's scores on the other personality factors are obtained in the same way.
206. Set the personality dimensions to be examined for the interviewed position according to the job skills of that position, the job skills being the skills required by the position the interviewee is applying for. For example, to help the interviewer measure the match between the interviewee and the position, interview positions may be divided in advance into several categories, such as technical without much communication (development, research, security and similar positions), technical requiring communication (architecture, project and similar positions), non-technical without much communication (logistics, operations, finance and similar positions), and non-technical requiring communication (product, pre-sales, retail and similar positions); for each job category a preset number of personality dimensions is selected as the basis of examination for that category. Specifically, a preset number of personality factors may be selected from Cattell's 16 personality factors as the examination basis for the interviewed position, with a matching score or examination grade set for each selected factor; the specific form of examination is not limited by the present application.
207. Extract, from the interviewee's personality evaluation result, the interviewee's evaluation results in the personality dimensions examined for the interviewed position. The examined personality dimensions are the preset number of dimensions for the position, and the interviewee's personality evaluation result contains the evaluation results in those dimensions. For example, if the examined dimensions are the warmth, reasoning, emotional stability, dominance, apprehension, self-reliance and perfectionism factors of Cattell's 16 personality factors, the corresponding scores are extracted from the interviewee's personality evaluation result and used as the examination scores for assessing the match between the interviewee and the position.
208. Determine the match between the interviewee and the interviewed position according to the interviewee's evaluation results in the personality dimensions examined for that position. The evaluation result may be a score or a grade; the specific form of examination is not limited by the present application. In this embodiment, an examination standard may be set in advance for the interviewed position: if the interviewee's evaluation results in the examined personality dimensions reach the standard, the interviewee matches the position; if they do not, the interviewee does not match the position. To help the interviewer measure the match between the interviewee and the team of the interviewed position, and to obtain the team's staffing needs in each personality dimension, steps 209 and 210 below may be executed after the interviewee's face image has been input into the personality prediction model and the corresponding personality evaluation result predicted, so that interviewees who better fit the team can be recruited. It should be noted that the match between the interviewee and the team is measured after step 205 is performed; the execution of steps 209 and 210 is not limited to taking place after steps 206 to 208. As shown in FIG. 3, measuring the match between the interviewee and the team of the interviewed position specifically includes the following steps:
209. According to the expected post composition of the team of the interviewed position, acquire the personality evaluation results of sample persons in each post of the team, the sample persons being current staff in each post of the team. In this embodiment, face images of the sample persons in each post of the team are acquired and input into the personality prediction model to predict those sample persons' evaluation results in each personality dimension. For example, using Cattell's 16 personality factors, the personality of the sample persons in each post of the team is divided into 16 dimensions, each sample person's score on each factor ranges from 0 to 100, and the personality prediction model yields each sample person's scores on the 16 personality factors.
210. Compare the average distribution of the sample persons' evaluation results in each personality dimension with a preset evaluation standard for the team, to obtain the team's staffing needs in each personality dimension. In this embodiment, if the average of the sample persons' evaluation results in a given personality dimension reaches the preset evaluation standard for the team, the team is determined to have a surplus in that dimension; a surplus means the team already has enough staff with that personality dimension and no more need to be allocated. If the average does not reach the preset standard, the team is determined to be scarce in that dimension; scarcity means the team lacks staff with that personality dimension, and staff may be recruited in that dimension. By determining the team's situation in each personality dimension, the interviewer can conveniently measure how well the interviewee matches the team of the interviewed position, and thus recruit interviewees who better fit that team.
For example, the team of the interviewed position has five sample persons. Their face images are input into the personality prediction model to obtain each sample person's scores on the 16 personality factors; the five scores on each factor are added and averaged, giving the team's average score per factor, for example: warmth 70, reasoning 79, emotional stability 66, dominance 75, liveliness 88, rule-consciousness 82, social boldness 55, sensitivity 77, vigilance 75, abstractedness 72, privateness 74, apprehension 70, openness to change 80, self-reliance 50, perfectionism 52 and tension 70 points. The team's preset evaluation standard on each personality factor is 70 points. By comparison, the team's sample persons fail to reach the standard on the emotional stability, self-reliance, perfectionism and social boldness factors, so the team is scarce in those factors, which means it needs interviewees with those personality dimensions; conversely, the sample persons reach the standard on the other factors, so the team has a surplus in those factors, which means it does not need interviewees with those other personality dimensions.
In the present application, face sample images are input into a convolutional neural network to construct a personality prediction model that records the mapping relationship between face images and personality evaluation results, so the model can predict the interviewee's personality evaluation results in different personality dimensions. Compared with the prior-art approach in which the interviewer judges the candidate's personality traits only by talking with the candidate and drawing on previous experience and data, the embodiments of the present application predict the interviewee's personality evaluation results in different personality dimensions, provide the interviewer with more information about the interviewee's personality traits, and compare the interviewee's evaluation results with the evaluation standards set for the interviewed position and its team, so that the interviewer can measure in all respects how well the interviewee matches the hiring position and hiring team, which facilitates the company's selection of the talent it needs.
An embodiment of the present application provides yet another personality prediction method for an interviewee. As shown in FIG. 4, the method includes:
301. Acquire a plurality of face sample images carrying personality evaluation result labels. The specific implementation of acquiring the face sample images is the same as in step 101 and is not repeated here. It should be noted that the evaluation results for the different personality dimensions may be the interviewee's scores in those dimensions or the interviewee's evaluation grades in those dimensions, which the embodiments of the present application do not limit.
302. Input the face sample images into a convolutional neural network for training to construct a personality prediction model, the personality prediction model recording the mapping relationship between face images and personality evaluation results.
In this embodiment, the convolutional neural network model may, among other options, adopt the VGG-16 model, which contains five stages, a fully connected layer and a classification layer; each stage contains one max-pooling layer and several convolutional layers; the number of convolution kernels per layer starts at 64 in the first stage and doubles at each stage up to 512; and the whole VGG-16 network uses the same 3×3 convolution kernel size and 2×2 max-pooling size. The training process with the VGG-16 model may include, but is not limited to, the following: the convolutional layers of each stage of VGG-16 extract the feature parameters of the face sample image in each personality dimension; after the face sample image is input into the VGG-16 model, the number of output channels becomes 64 after the first-stage convolutional layers, 128 after the second stage, and 256, 512 and 512 after the third, fourth and fifth stages respectively; the fully connected layer of the VGG-16 model, located after the fifth stage, aggregates the feature parameters of the face sample image in each personality dimension to obtain multi-dimensional face-image feature parameters in each personality dimension; the five pooling layers of the VGG-16 model, one at the end of each stage, reduce the dimensionality of those feature parameters to obtain the weight vector of the face image in each personality dimension; and the final classification layer of the VGG-16 model classifies the weight vectors in each personality dimension and generates the personality evaluation result of the face image in each personality dimension, thereby constructing the personality prediction model. The personality prediction model outputs, for each input face image of an interviewee, the interviewee's personality evaluation results in each personality dimension. For example, using Cattell's 16 personality factors, the interviewee's personality is divided into 16 dimensions, i.e. the output of the personality prediction model is divided into 16 classes; the interviewee's face image is input into the constructed model to predict the evaluation results in the 16 personality dimensions.
303. Input the interviewee's face image into the personality prediction model to predict the personality evaluation result corresponding to the interviewee. In this embodiment, after the interviewee's photograph is obtained, face detection is performed on it to remove the redundant background, for example by background-filtering the photograph with the open-source MTCNN model, to obtain the interviewee's face image; other face-detection approaches may also be used, which the present application does not limit.
304. Set the personality dimensions to be examined for the interviewed position according to the job skills of that position. It should be noted that the examined personality dimensions are not limited to being set from Cattell's 16 personality factors; other schemes may be used, for example examining the interviewee's traits of reason, emotion, will, introversion, extroversion, independence and compliance.
305. Extract, from the interviewee's personality evaluation result, the evaluation results in the personality dimensions examined for the interviewed position. The way of extracting these results is the same as in step 207 and is not repeated here.
306a. If the interviewee's evaluation results in the personality dimensions examined for the interviewed position reach the preset evaluation standard for the position, determine that the interviewee matches the position. For example, the self-reliance, perfectionism, social boldness and emotional stability factors of Cattell's 16 personality factors are the examined personality dimensions of the interviewed position, and an examination score of 70 points is set for each of the four factors; if the interviewee's scores on the self-reliance, perfectionism, social boldness and emotional stability factors are all above the corresponding preset examination score of 70 points, the interviewee is determined to match the position.
Corresponding to step 306a is step 306b. If the interviewee's evaluation results in the personality dimensions examined for the interviewed position do not reach the preset evaluation standard for the position, determine that the interviewee does not match the position. For example, with the same four examined factors and a preset examination score of 70 points each, if the interviewee's score on any one of the four factors is below the preset examination score of 70 points, the interviewee is determined not to match the position and needs to be examined further.
307b. Randomly select, from an underlying database, interview questions in the personality dimensions examined for the interviewed position, and examine the interviewee according to those questions; interview questions for each personality dimension are pre-stored in the underlying database. In this embodiment, if the interviewee's evaluation results in the examined personality dimensions do not reach the preset standard for the position, the interviewee is determined not to match the position and needs further examination; the interviewer randomly draws questions from the underlying database in the dimensions on which the standard was not reached, poses them to the interviewee, and decides whether to hire the interviewee based on the answers together with the interviewee's evaluation results in the examined dimensions.
In the present application, face sample images are input into a convolutional neural network to construct a personality prediction model that records the mapping relationship between face images and personality evaluation results, so the model can predict the interviewee's personality evaluation results in different personality dimensions. Compared with the prior-art approach in which the interviewer judges the candidate's personality traits only by talking with the candidate and drawing on previous experience and data, the embodiments of the present application predict the interviewee's personality evaluation results in different personality dimensions, provide the interviewer with more information about the interviewee's personality traits, and compare the interviewee's evaluation results with the evaluation standards set for the interviewed position and its team, so that the interviewer can measure in all respects how well the interviewee matches the hiring position and hiring team, which facilitates the company's selection of the talent it needs.
Further, as a specific implementation of the method shown in FIG. 1, an embodiment of the present application provides a personality prediction apparatus for an interviewee. As shown in FIG. 5, the apparatus includes an acquisition unit 41, a construction unit 42 and a prediction unit 43.
The acquisition unit 41 may be configured to acquire a plurality of face sample images carrying personality evaluation result labels; the construction unit 42 may be configured to input the face sample images acquired by the acquisition unit into a convolutional neural network for training to construct a personality prediction model, the personality prediction model recording the mapping relationship between face images and personality evaluation results; the prediction unit 43 may be configured to input an interviewee's face image into the personality prediction model constructed by the construction unit to predict the personality evaluation result corresponding to the interviewee.
The personality prediction apparatus for an interviewee provided by the embodiments of the present application first inputs face sample images into a convolutional neural network for training to construct a personality prediction model that records the mapping relationship between face images and personality evaluation results; the model can then predict the interviewee's evaluation results in each personality dimension. Compared with the prior-art approach in which the interviewer judges the candidate's personality traits only by talking with the candidate and drawing on previous experience and data, this provides the interviewer with more information about the interviewee's personality traits, helps the interviewer measure in all respects how well the interviewee matches the interviewed position and its team, and facilitates the company's selection of the talent it needs.
As a further description of the apparatus shown in FIG. 5, FIG. 6 is a schematic structural diagram of another personality prediction apparatus for an interviewee according to an embodiment of the present application. As shown in FIG. 6, the apparatus further includes:
a setting unit 44, which may be configured to set the personality dimensions examined for the interviewed position according to the job skills of the position interviewed for by the interviewee; and an extraction unit 45, which may be configured to extract, from the personality evaluation result corresponding to the interviewee, the interviewee's evaluation results in the personality dimensions examined for the interviewed position;
a determination unit 46, which may be configured to determine the match between the interviewee and the interviewed position according to the evaluation results, extracted by the extraction unit, of the interviewee in the personality dimensions examined for the interviewed position.
Further, the apparatus also includes a comparison unit 47. The acquisition unit 41 may further be configured to acquire, according to the expected post composition of the team of the position interviewed for by the interviewee, the personality evaluation results corresponding to sample persons in each post of the team; the comparison unit 47 may be configured to compare the average distribution of the sample persons' evaluation results in each personality dimension with the preset evaluation standard for the team, to obtain the team's staffing needs in each personality dimension.
Further, the comparison unit 47 may specifically be configured to determine that the team of the interviewed position has a surplus in a given personality dimension if the average distribution of the sample persons' evaluation results in that dimension reaches the preset evaluation standard for the team, and to determine that the team is scarce in a given personality dimension if the average distribution does not reach that standard.
Further, the acquisition unit 41 includes: an acquisition module 411, which may be configured to acquire face images of each personality dimension from the face image library, the face image library containing face images with different personality evaluation results; and a filtering module 413, which may be configured to perform background filtering, using an open-source network model, on the face images of each personality dimension acquired by the acquisition module, to obtain multiple face sample images of different personality dimensions.
Further, the acquisition unit 41 also includes a labelling module 412, which may be configured to label face images according to the personality evaluation results in the face image library to obtain face images carrying labels for different personality evaluation results.
Further, the construction unit 42 includes: an extraction module 421, which may be configured to extract, through the convolutional layers of the convolutional neural network model, the feature parameters of the face sample image in each personality dimension; an aggregation module 422, which may be configured to aggregate, through the fully connected layer of the convolutional neural network model, those feature parameters to obtain multi-dimensional face-image feature parameters in each personality dimension; a dimensionality-reduction module 423, which may be configured to reduce, through the pooling layer of the convolutional neural network model, the dimensionality of the multi-dimensional face-image feature parameters in each personality dimension to obtain the weight vector of the face image in each personality dimension; and a generation module 424, which may be configured to generate, through the classification layer of the convolutional neural network model and according to the weight vector of the face image in each personality dimension, the personality evaluation result of the face image in each personality dimension, and to construct the personality prediction model.
As a further description of the apparatus shown in FIG. 5, FIG. 7 is a schematic structural diagram of yet another personality prediction apparatus for an interviewee according to an embodiment of the present application. As shown in FIG. 7, the determination unit 46 includes a judging module 461, which may be configured to determine that the interviewee matches the interviewed position if the interviewee's evaluation results in the personality dimensions examined for the position reach the preset evaluation standard for the position, and to determine that the interviewee does not match the interviewed position if those evaluation results do not reach the preset evaluation standard.
Further, the determination unit 46 also includes a selection module 462, which may be configured to randomly select, from the underlying database, interview questions in the personality dimensions examined for the interviewed position and to examine the interviewee according to those questions; interview questions in each personality dimension are pre-stored in the underlying database.
It should be noted that, for other corresponding descriptions of the functional units involved in the personality prediction apparatus for an interviewee provided by this embodiment, reference may be made to the corresponding descriptions of FIG. 1 to FIG. 4, which are not repeated here.
Based on the methods shown in FIG. 1 to FIG. 4 above, this embodiment correspondingly also provides a non-volatile readable storage medium on which computer-readable instructions are stored; when the readable instructions are executed by a processor, the personality prediction method for an interviewee shown in FIG. 1 to FIG. 4 above is implemented.
Based on such understanding, the technical solution of the present application may be embodied in the form of a software product, which may be stored in a non-volatile readable storage medium (such as a CD-ROM, USB flash drive or removable hard disk) and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods described in the various implementation scenarios of the present application.
Based on the methods shown in FIG. 1 to FIG. 4 and the virtual apparatus embodiments shown in FIG. 5 to FIG. 7, in order to achieve the above objects, an embodiment of the present application also provides a computer device, which may specifically be a personal computer, a server, a network device, etc. The physical device includes a non-volatile readable storage medium and a processor; the non-volatile readable storage medium is used to store computer-readable instructions, and the processor is used to execute the computer-readable instructions to implement the personality prediction method for an interviewee shown in FIG. 1 to FIG. 4 above.
Optionally, the computer device may further include a user interface, a network interface, a camera, a radio frequency (RF) circuit, a sensor, an audio circuit, a Wi-Fi module, and so on. The user interface may include a display, an input unit such as a keyboard, and the like; optionally, the user interface may also include a USB interface, a card-reader interface, and the like. The network interface may optionally include a standard wired interface and a wireless interface (such as a Bluetooth interface or a Wi-Fi interface).
Those skilled in the art will understand that the physical device structure provided for interviewee personality prediction in this embodiment does not constitute a limitation on the physical device, which may include more or fewer components, combine certain components, or use a different arrangement of components.
The non-volatile readable storage medium may also include an operating system and a network communication module. The operating system is a program that manages the hardware and software resources of the computer device described above and supports the running of the information-processing program and other software and/or programs. The network communication module is used to implement communication among the components inside the non-volatile readable storage medium and communication with other hardware and software in the physical device.
From the description of the above embodiments, those skilled in the art will clearly understand that the present application may be implemented by means of software plus a necessary general-purpose hardware platform, or by hardware. By applying the technical solution of the present application, compared with the current prior art, face sample images are input into a convolutional neural network for training to construct a personality prediction model that predicts the interviewee's evaluation results in each personality dimension, which provides the interviewer with more information about the interviewee's personality traits, helps the interviewer measure in all respects how well the interviewee matches the interviewed position and its team, and facilitates the company's selection of the talent it needs.
Those skilled in the art will understand that the drawings are only schematic diagrams of preferred implementation scenarios, and the modules or processes in the drawings are not necessarily required for implementing the present application. Those skilled in the art will understand that the modules in the apparatus of an implementation scenario may be distributed in the apparatus of that scenario as described, or may be located, with corresponding changes, in one or more apparatuses different from that of this implementation scenario. The modules of the above implementation scenarios may be combined into one module or further split into multiple sub-modules.
The above serial numbers of the present application are for description only and do not represent the superiority or inferiority of the implementation scenarios. The above disclosure is only several specific implementation scenarios of the present application; the present application, however, is not limited thereto, and any variation conceivable to those skilled in the art shall fall within the protection scope of the present application.

Claims (20)

  1. A personality prediction method for an interviewee, comprising:
    acquiring a plurality of face sample images carrying personality evaluation result labels;
    inputting the face sample images into a convolutional neural network for training to build a personality prediction model, the personality prediction model recording a mapping relationship between face images and personality evaluation results; and
    inputting a face image of an interviewee into the personality prediction model to predict the personality evaluation result corresponding to the interviewee.
  2. The method according to claim 1, wherein acquiring the plurality of face sample images carrying personality evaluation result labels comprises:
    acquiring face images for each personality dimension from a face image library, the face image library containing face images with different personality evaluation results;
    performing background filtering on the face images of each personality dimension using an open-source network model, to obtain a plurality of face sample images for different personality dimensions; and
    labeling the face images according to the personality evaluation results in the face image library, to obtain face images carrying labels for different personality evaluation results.
  3. The method according to claim 1, wherein the convolutional neural network model comprises a multi-layer structure, and inputting the plurality of face sample images into the convolutional neural network for training to build the personality prediction model comprises:
    extracting, through a convolutional layer of the convolutional neural network model, feature parameters of the face sample images on each personality dimension;
    aggregating, through a fully connected layer of the convolutional neural network model, the feature parameters of the face sample images on each personality dimension, to obtain feature parameters of a multi-dimensional face image on each personality dimension;
    performing, through a pooling layer of the convolutional neural network model, dimensionality reduction on the feature parameters of the multi-dimensional face image on each personality dimension, to obtain weight vectors of the face image on each personality dimension; and
    generating, through a classification layer of the convolutional neural network model, personality evaluation results of the face image on each personality dimension from the weight vectors of the face image on each personality dimension, thereby building the personality prediction model.
  4. The method according to any one of claims 1 to 3, wherein the personality evaluation result comprises the interviewee's evaluation results on each personality dimension, and after inputting the face image of the interviewee into the personality prediction model to predict the personality evaluation result corresponding to the interviewee, the method further comprises:
    setting, according to job skills of the position the interviewee is applying for, the personality dimensions to be examined for the interview position;
    extracting, from the personality evaluation result corresponding to the interviewee, the interviewee's evaluation results on the personality dimensions examined for the interview position; and
    determining, according to the interviewee's evaluation results on the personality dimensions examined for the interview position, whether the interviewee matches the interview position.
  5. The method according to claim 4, wherein determining whether the interviewee matches the interview position according to the interviewee's evaluation results on the personality dimensions examined for the interview position comprises:
    determining that the interviewee matches the interview position if the interviewee's evaluation results on the personality dimensions examined for the interview position reach a preset evaluation standard for the interview position; and
    determining that the interviewee does not match the interview position if the interviewee's evaluation results on the personality dimensions examined for the interview position do not reach the preset evaluation standard for the interview position.
  6. The method according to claim 5, wherein after determining that the interviewee does not match the interview position if the interviewee's evaluation results on the personality dimensions examined for the interview position do not reach the preset evaluation standard for the interview position, the method further comprises:
    randomly selecting, from an underlying database, interview questions on the personality dimensions examined for the interview position, and examining the interviewee with the interview questions, the underlying database pre-storing interview questions for each personality dimension.
  7. The method according to claim 5, wherein after inputting the face image of the interviewee into the personality prediction model to predict the personality evaluation result corresponding to the interviewee, the method further comprises:
    acquiring, according to an expected position composition of the team of the position the interviewee is applying for, personality evaluation results corresponding to sample personnel in each position of the team; and
    comparing an average distribution of the sample personnel's evaluation results on each personality dimension with a preset evaluation standard for the team, to obtain the team's staffing needs on each personality dimension.
  8. A personality prediction apparatus for an interviewee, comprising:
    an acquisition unit, configured to acquire a plurality of face sample images carrying personality evaluation result labels;
    a construction unit, configured to input the face sample images acquired by the acquisition unit into a convolutional neural network for training to build a personality prediction model, the personality prediction model recording a mapping relationship between face images and personality evaluation results; and
    a prediction unit, configured to input a face image of an interviewee into the personality prediction model built by the construction unit, to predict the personality evaluation result corresponding to the interviewee.
  9. The apparatus according to claim 8, wherein the acquisition unit comprises:
    an acquisition module, configured to acquire face images for each personality dimension from a face image library, the face image library containing face images with different personality evaluation results;
    a filtering module, configured to perform background filtering, using an open-source network model, on the face images of each personality dimension acquired by the acquisition module, to obtain a plurality of face sample images for different personality dimensions; and
    a labeling module, configured to label the face images according to the personality evaluation results in the face image library, to obtain face images carrying labels for different personality evaluation results.
  10. The apparatus according to claim 8, wherein the construction unit comprises:
    an extraction module, configured to extract, through a convolutional layer of the convolutional neural network model, feature parameters of the face sample images on each personality dimension;
    an aggregation module, configured to aggregate, through a fully connected layer of the convolutional neural network model, the feature parameters of the face sample images on each personality dimension, to obtain feature parameters of a multi-dimensional face image on each personality dimension;
    a dimensionality reduction module, configured to perform, through a pooling layer of the convolutional neural network model, dimensionality reduction on the feature parameters of the multi-dimensional face image on each personality dimension, to obtain weight vectors of the face image on each personality dimension; and
    a generation module, configured to generate, through a classification layer of the convolutional neural network model, personality evaluation results of the face image on each personality dimension from the weight vectors of the face image on each personality dimension, thereby building the personality prediction model.
  11. The apparatus according to any one of claims 8 to 10, further comprising:
    a setting unit, configured to set, according to job skills of the position the interviewee is applying for, the personality dimensions to be examined for the interview position;
    an extraction unit, configured to extract, from the personality evaluation result corresponding to the interviewee, the interviewee's evaluation results on the personality dimensions examined for the interview position; and
    a determination unit, configured to determine, according to the interviewee's evaluation results on the examined personality dimensions extracted by the extraction unit, whether the interviewee matches the interview position.
  12. The apparatus according to claim 11, wherein the determination unit comprises:
    a judgment module, configured to determine that the interviewee matches the interview position if the interviewee's evaluation results on the personality dimensions examined for the interview position reach a preset evaluation standard for the interview position;
    the judgment module being further configured to determine that the interviewee does not match the interview position if the interviewee's evaluation results on the personality dimensions examined for the interview position do not reach the preset evaluation standard for the interview position.
  13. The apparatus according to claim 11, wherein the determination unit further comprises:
    a selection module, configured to randomly select, from an underlying database, interview questions on the personality dimensions examined for the interview position and to examine the interviewee with the interview questions, the underlying database pre-storing interview questions for each personality dimension.
  14. The apparatus according to claim 12, further comprising a comparison unit, wherein:
    the acquisition unit is further configured to acquire, according to an expected position composition of the team of the position the interviewee is applying for, personality evaluation results corresponding to sample personnel in each position of the team; and
    the comparison unit is configured to compare an average distribution of the sample personnel's evaluation results on each personality dimension with a preset evaluation standard for the team, to obtain the team's staffing needs on each personality dimension.
  15. A computer non-volatile readable storage medium storing computer-readable instructions, wherein the computer-readable instructions, when executed by a processor, implement a personality prediction method for an interviewee, comprising:
    acquiring a plurality of face sample images carrying personality evaluation result labels; inputting the face sample images into a convolutional neural network for training to build a personality prediction model, the personality prediction model recording a mapping relationship between face images and personality evaluation results; and inputting a face image of an interviewee into the personality prediction model to predict the personality evaluation result corresponding to the interviewee.
  16. The computer non-volatile readable storage medium according to claim 15, wherein when the computer-readable instructions are executed by the processor, acquiring the plurality of face sample images carrying personality evaluation result labels comprises:
    acquiring face images for each personality dimension from a face image library, the face image library containing face images with different personality evaluation results; performing background filtering on the face images of each personality dimension using an open-source network model, to obtain a plurality of face sample images for different personality dimensions; and labeling the face images according to the personality evaluation results in the face image library, to obtain face images carrying labels for different personality evaluation results.
  17. The computer non-volatile readable storage medium according to claim 15, wherein when the computer-readable instructions are executed by the processor, the convolutional neural network model comprises a multi-layer structure, and inputting the plurality of face sample images into the convolutional neural network for training to build the personality prediction model comprises:
    extracting, through a convolutional layer of the convolutional neural network model, feature parameters of the face sample images on each personality dimension; aggregating, through a fully connected layer of the convolutional neural network model, the feature parameters of the face sample images on each personality dimension, to obtain feature parameters of a multi-dimensional face image on each personality dimension; performing, through a pooling layer of the convolutional neural network model, dimensionality reduction on the feature parameters of the multi-dimensional face image on each personality dimension, to obtain weight vectors of the face image on each personality dimension; and generating, through a classification layer of the convolutional neural network model, personality evaluation results of the face image on each personality dimension from the weight vectors of the face image on each personality dimension, thereby building the personality prediction model.
  18. A computer device, comprising a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, wherein the processor, when executing the computer-readable instructions, implements a personality prediction method for an interviewee, comprising:
    acquiring a plurality of face sample images carrying personality evaluation result labels; inputting the face sample images into a convolutional neural network for training to build a personality prediction model, the personality prediction model recording a mapping relationship between face images and personality evaluation results; and inputting a face image of an interviewee into the personality prediction model to predict the personality evaluation result corresponding to the interviewee.
  19. The computer device according to claim 18, wherein when the processor executes the computer-readable instructions, acquiring the plurality of face sample images carrying personality evaluation result labels comprises:
    acquiring face images for each personality dimension from a face image library, the face image library containing face images with different personality evaluation results; performing background filtering on the face images of each personality dimension using an open-source network model, to obtain a plurality of face sample images for different personality dimensions; and labeling the face images according to the personality evaluation results in the face image library, to obtain face images carrying labels for different personality evaluation results.
  20. The computer device according to claim 18, wherein when the processor executes the computer-readable instructions, the convolutional neural network model comprises a multi-layer structure, and inputting the plurality of face sample images into the convolutional neural network for training to build the personality prediction model comprises:
    extracting, through a convolutional layer of the convolutional neural network model, feature parameters of the face sample images on each personality dimension; aggregating, through a fully connected layer of the convolutional neural network model, the feature parameters of the face sample images on each personality dimension, to obtain feature parameters of a multi-dimensional face image on each personality dimension; performing, through a pooling layer of the convolutional neural network model, dimensionality reduction on the feature parameters of the multi-dimensional face image on each personality dimension, to obtain weight vectors of the face image on each personality dimension; and generating, through a classification layer of the convolutional neural network model, personality evaluation results of the face image on each personality dimension from the weight vectors of the face image on each personality dimension, thereby building the personality prediction model.
PCT/CN2019/073939 2018-11-09 2019-01-30 Personality prediction method and apparatus for an interviewee, and computer-readable storage medium WO2020093614A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811331347.9A CN109710272A (zh) 2018-11-09 2018-11-09 Packaging method and apparatus for update files
CN201811331347.9 2018-11-09

Publications (1)

Publication Number Publication Date
WO2020093614A1 true WO2020093614A1 (zh) 2020-05-14

Family

ID=66254159

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/073939 WO2020093614A1 (zh) Personality prediction method and apparatus for an interviewee, and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN109710272A (zh)
WO (1) WO2020093614A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110570167B (zh) * 2019-08-28 2023-04-07 珠海格力智能装备有限公司 Engineering project file management method and system
CN113409125A (zh) * 2021-07-14 2021-09-17 京东安联财产保险有限公司 Push information processing method and apparatus, electronic device, and readable medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108021864A (zh) * 2017-11-02 2018-05-11 平安科技(深圳)有限公司 Character personality analysis method, apparatus, and storage medium
CN108038414A (zh) * 2017-11-02 2018-05-15 平安科技(深圳)有限公司 Character personality analysis method and apparatus based on a recurrent neural network, and storage medium
CN108197574A (zh) * 2018-01-04 2018-06-22 张永刚 Personal style recognition method, terminal, and computer-readable storage medium
CN108205661A (zh) * 2017-12-27 2018-06-26 浩云科技股份有限公司 Deep-learning-based abnormal face detection method for ATMs
US20180190377A1 (en) * 2016-12-30 2018-07-05 Dirk Schneemann, LLC Modeling and learning character traits and medical condition based on 3d facial features
CN109657542A (zh) * 2018-11-09 2019-04-19 深圳壹账通智能科技有限公司 Personality prediction method and apparatus for an interviewee, computer device, and computer storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020116496A1 (en) * 2001-02-16 2002-08-22 Gemini Networks, Inc. System, method, and computer program product for dynamic bandwidth provisioning
CN101256492A (zh) * 2008-03-31 2008-09-03 宋乃辉 Software development method and system for model-driven architecture
CN108121820A (zh) * 2017-12-29 2018-06-05 北京奇虎科技有限公司 Search method and apparatus based on a mobile terminal

Also Published As

Publication number Publication date
CN109710272A (zh) 2019-05-03

Similar Documents

Publication Publication Date Title
JP6700396B2 (ja) 才能のデータ駆動型識別のシステム及び方法
JP6386107B2 (ja) グローバルモデルからの局所化された学習
Li et al. Influence of entrepreneurial experience, alertness, and prior knowledge on opportunity recognition
Bunel et al. Key issues in local job accessibility measurement: Different models mean different results
JP2019519021A (ja) パフォーマンスモデル悪影響補正
US10971255B2 (en) Multimodal learning framework for analysis of clinical trials
CN106663240A (zh) 用于对人才的数据驱动辨识的系统和方法
CN109657542A (zh) 面试人员的性格预测方法、装置、计算机设备及计算机存储介质
WO2020093614A1 (zh) 面试人员的性格预测方法、装置及计算机可读存储介质
CN109872026A (zh) 评测结果生成方法、装置、设备及计算机可读存储介质
Al Sheikh et al. Developing and implementing a barcode based student attendance system
Shah et al. When is it better to compare than to score?
Schecter et al. The power, accuracy, and precision of the relational event model
KR20220007193A (ko) 기계학습모델을 이용하여 면접영상에 대한 자동화된 평가를 위한 심층질문을 도출하는 방법, 시스템 및 컴퓨터-판독가능 매체
Saravanan et al. Meeting report: tissue-based image analysis
US10803318B1 (en) Automated scoring of video clips using extracted physiological features
Calikli et al. An algorithmic approach to missing data problem in modeling human aspects in software development
KR102407212B1 (ko) 기계학습모델을 이용하여 면접영상에 대한 자동화된 평가를 제공하는 방법, 시스템 및 컴퓨터-판독가능 매체
Ahmad et al. Path analysis of genuine leadership and job life of teachers.
Constantin et al. How Do Algorithmic Fairness Metrics Align with Human Judgement? A Mixed-Initiative System for Contextualized Fairness Assessment
Li et al. Deep learning-based visual identification of signs of bat presence in bridge infrastructure images: A transfer learning approach
Danbatta et al. Predicting student’s final graduation CGPA using data mining and regression methods: a case study of Kano informatics institute
JP7079882B1 (ja) 情報処理方法、コンピュータプログラム及び情報処理装置
Oulhaj et al. Testing for qualitative heterogeneity: An application to composite endpoints in survival analysis
Zhong et al. Predicting Collaborative Task Performance Using Graph Interlocutor Acoustic Network in Small Group Interaction.

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19882120; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 05/08/2021))
122 Ep: pct application non-entry in european phase (Ref document number: 19882120; Country of ref document: EP; Kind code of ref document: A1)
Kind code of ref document: A1