WO2021042842A1 - Interview method, device and computer equipment based on an AI interview system - Google Patents

Interview method, device and computer equipment based on an AI interview system

Info

Publication number
WO2021042842A1
Authority
WO
WIPO (PCT)
Prior art keywords
question
answer
user
preset
interview
Prior art date
Application number
PCT/CN2020/098822
Other languages
English (en)
French (fr)
Inventor
金培根
李炫
徐晓松
刘喜声
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2021042842A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent

Definitions

  • This application relates to the field of artificial intelligence technology, in particular to an interview method, device, computer equipment and storage medium based on an AI interview system.
  • AI (artificial intelligence) interviews can reduce labor costs and have strong development potential.
  • The traditional AI interview method can only provide the respondent with a predetermined questionnaire and generate an interview report based on the respondent's answers to that questionnaire.
  • The inventor realized that, because the questionnaire is predetermined, the group of people it suits is also fixed and its scope of application is narrow; different questionnaires therefore have to be provided for interviewees belonging to different groups, which consumes excessive resources and cost. Moreover, because some respondents are malicious interviewees or interviewees with poor psychological resilience, and traditional technical solutions either ignore these aspects or rely on senior interviewers for screening, traditional solutions cannot effectively screen out abnormal interviewees.
  • The main purpose of this application is to provide an interview method, device, computer equipment, and storage medium based on an AI interview system, aiming to provide a questionnaire that supports flexible branching, to ensure the quality of questionnaire answers, and to screen out abnormal interviewees.
  • In a first aspect, this application proposes an interview method based on an AI interview system, which includes the following steps:
  • outputting a preset first question to the user terminal, where the first question is the root question of a pre-generated tree-shaped question chain;
  • receiving the first answer from the user terminal to the first question, and inputting the first answer into a preset intention recognition model trained based on a machine learning model, so as to obtain the intention of the user terminal, where the intention recognition model is trained based on sample data composed of the first question, answers to the first question, and the intentions associated with the answers;
  • turning on the camera of the user terminal to collect a facial image of the user corresponding to the user terminal, and inputting the facial image into a preset micro-expression recognition model trained based on a neural network model for calculation, so as to obtain the user's nervousness value, where the micro-expression recognition model is trained based on sample data composed of face images and the nervousness values associated with the face images;
  • determining whether the user's nervousness value is within a preset emotion range;
  • if the user's nervousness value is within the preset emotion range, obtaining, according to the intention of the user terminal and the preset correspondence between intentions and question nodes, the second question connected to the first question in the tree-shaped question chain;
  • outputting the second question to the user terminal;
  • receiving the second answer from the user terminal to the second question, and determining whether the second question or the second answer meets a preset trigger condition for terminating the interview;
  • if the second question or the second answer meets the preset trigger condition for terminating the interview, generating an interview report, where the interview report includes the first question, the first answer, the second question, and the second answer.
  • In a second aspect, this application provides an interview device based on the AI interview system, including:
  • a first question output unit, configured to output a preset first question to the user terminal, where the first question is the root question of a pre-generated tree-shaped question chain;
  • an intention acquisition unit, configured to receive the first answer from the user terminal to the first question, and to input the first answer into a preset intention recognition model trained based on a machine learning model, so as to obtain the intention of the user terminal, where the intention recognition model is trained based on sample data composed of the first question, answers to the first question, and the intentions associated with the answers;
  • a nervousness value acquiring unit, configured to turn on the camera of the user terminal to collect the facial image of the user corresponding to the user terminal, and to input the facial image into a preset micro-expression recognition model trained based on a neural network model for calculation, so as to obtain the user's nervousness value, where the micro-expression recognition model is trained based on sample data composed of face images and the nervousness values associated with the face images;
  • a nervousness value judging unit, configured to judge whether the user's nervousness value is within a preset emotion range;
  • a second question acquisition unit, configured to, if the user's nervousness value is within the preset emotion range, obtain, according to the intention of the user terminal and the preset correspondence between intentions and question nodes, the second question connected to the first question in the tree-shaped question chain;
  • a second question output unit, configured to output the second question to the user terminal;
  • a termination judgment unit, configured to receive the second answer from the user terminal to the second question, and to judge whether the second question or the second answer meets a preset trigger condition for terminating the interview;
  • an interview report generating unit, configured to generate an interview report if the second question or the second answer meets the preset trigger condition for terminating the interview, where the interview report includes the first question, the first answer, the second question, and the second answer.
  • In a third aspect, the present application provides a computer device, including a memory and a processor, where the memory stores computer-readable instructions, and when the processor executes the computer-readable instructions, an interview method based on an AI interview system is implemented, the method including the following steps:
  • outputting a preset first question to the user terminal; receiving the first answer from the user terminal to the first question, and inputting the first answer into a preset intention recognition model trained based on a machine learning model to obtain the intention of the user terminal, where the intention recognition model is trained based on sample data composed of the first question, answers to the first question, and the intentions associated with the answers;
  • turning on the camera of the user terminal to collect a facial image of the user corresponding to the user terminal, and inputting the facial image into a preset micro-expression recognition model trained based on a neural network model for calculation, so as to obtain the user's nervousness value, where the micro-expression recognition model is trained based on sample data composed of face images and the nervousness values associated with the face images;
  • determining whether the user's nervousness value is within a preset emotion range; if it is, obtaining, according to the intention of the user terminal and the preset correspondence between intentions and question nodes, the second question connected to the first question in the tree-shaped question chain, and outputting the second question to the user terminal;
  • receiving the second answer from the user terminal to the second question, and determining whether the second question or the second answer meets a preset trigger condition for terminating the interview; if it does, generating an interview report, where the interview report includes the first question, the first answer, the second question, and the second answer.
  • In a fourth aspect, this application provides a computer-readable storage medium on which computer-readable instructions are stored, and when the computer-readable instructions are executed by a processor, an interview method based on an AI interview system is implemented, the method including the following steps:
  • outputting a preset first question to the user terminal; receiving the first answer from the user terminal to the first question, and inputting the first answer into a preset intention recognition model trained based on a machine learning model to obtain the intention of the user terminal, where the intention recognition model is trained based on sample data composed of the first question, answers to the first question, and the intentions associated with the answers;
  • turning on the camera of the user terminal to collect a facial image of the user corresponding to the user terminal, and inputting the facial image into a preset micro-expression recognition model trained based on a neural network model for calculation, so as to obtain the user's nervousness value, where the micro-expression recognition model is trained based on sample data composed of face images and the nervousness values associated with the face images;
  • determining whether the user's nervousness value is within a preset emotion range; if it is, obtaining, according to the intention of the user terminal and the preset correspondence between intentions and question nodes, the second question connected to the first question in the tree-shaped question chain, and outputting the second question to the user terminal;
  • receiving the second answer from the user terminal to the second question, and determining whether the second question or the second answer meets a preset trigger condition for terminating the interview; if it does, generating an interview report, where the interview report includes the first question, the first answer, the second question, and the second answer.
  • The interview method, device, computer equipment, and storage medium based on the AI interview system of this application provide a questionnaire that supports flexible branching, ensure the quality of the questionnaire answers, and screen out abnormal interviewees.
  • FIG. 1 is a schematic flowchart of an interview method based on an AI interview system according to an embodiment of the application
  • FIG. 2 is a schematic block diagram of the structure of an interview device based on an AI interview system according to an embodiment of the application;
  • FIG. 3 is a schematic block diagram of the structure of a computer device according to an embodiment of the application.
  • Referring to FIG. 1, an embodiment of the application provides an interview method based on an AI interview system, which includes the following steps:
  • S1: Output a preset first question to the user terminal, where the first question is the root question of a pre-generated tree-shaped question chain;
  • S2: Receive the first answer from the user terminal to the first question, and input the first answer into a preset intention recognition model trained based on a machine learning model, so as to obtain the intention of the user terminal, where the intention recognition model is trained based on sample data composed of the first question, answers to the first question, and the intentions associated with the answers;
  • S3: Turn on the camera of the user terminal to collect a facial image of the user corresponding to the user terminal, and input the facial image into a preset micro-expression recognition model trained based on a neural network model for calculation, so as to obtain the user's nervousness value, where the micro-expression recognition model is trained based on sample data composed of face images and the nervousness values associated with the face images;
  • S4: Determine whether the user's nervousness value is within a preset emotion range;
  • S5: If the user's nervousness value is within the preset emotion range, obtain, according to the intention of the user terminal and the preset correspondence between intentions and question nodes, the second question connected to the first question in the tree-shaped question chain;
  • S6: Output the second question to the user terminal;
  • S7: Receive the second answer from the user terminal to the second question, and determine whether the second question or the second answer meets a preset trigger condition for terminating the interview;
  • S8: If the second question or the second answer meets the preset trigger condition for terminating the interview, generate an interview report, where the interview report includes the first question, the first answer, the second question, and the second answer.
  • As described in step S1 above, the preset first question is output to the user terminal, where the first question is the root question of the pre-generated tree-shaped question chain.
  • the AI interview system of this application is used to conduct interviews with the corresponding users on the client side.
  • the user terminal may be a terminal owned by the user. The terminal establishes a connection with the AI interview system and becomes the user terminal after obtaining the permission of the AI interview system.
  • the user terminal may also be an output terminal of the AI interview system, and the user can accept the interview of the AI interview system through the output terminal.
  • The tree-shaped question chain refers to a tree-shaped network composed of multiple questions, in which the first question is the first node (that is, the root question), so the first question is connected to at least two other questions.
  • This application outputs one of the questions connected to the first question to the user terminal according to the user's true intention in answering the first question (see the sketch below), so as to provide suitable questions in a targeted and flexible manner and thereby guarantee the quality of the interview.
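  • Purely for illustration (not part of the patent disclosure), the following Python sketch shows one possible way to represent the tree-shaped question chain and the preset correspondence between intentions and question nodes; the class name QuestionNode, the intent labels, and the example question texts are assumptions made for this sketch.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class QuestionNode:
    """One node in the tree-shaped question chain."""
    text: str
    # Preset correspondence between recognized intents and child question nodes.
    children: Dict[str, "QuestionNode"] = field(default_factory=dict)

    def next_question(self, intent: str) -> Optional["QuestionNode"]:
        """Return the connected question for the recognized intent, or None at an end node."""
        return self.children.get(intent)

# Root (first) question with branches for three assumed salary-level intents.
root = QuestionNode(
    "How are things going for you recently?",
    children={
        "salary_high": QuestionNode("What leadership experience supports your salary expectation?"),
        "salary_mid": QuestionNode("Which projects best show your core skills?"),
        "salary_low": QuestionNode("What kind of training are you hoping to receive?"),
    },
)

second_question = root.next_question("salary_mid")
print(second_question.text if second_question else "end of question chain")
```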
  • As described in step S2 above, the first answer from the user terminal to the first question is received, and the first answer is input into a preset intention recognition model trained based on a machine learning model, so as to obtain the intention of the user terminal, where the intention recognition model is trained based on sample data composed of the first question, answers to the first question, and the intentions associated with the answers.
  • The user's first answer to the first question can take different forms. For example, the first question may be: "How are things going for you recently?"
  • The corresponding answer might be: "I am 40 years old, my monthly salary is 30,000, I graduated from a certain university, and my work experience includes employment at a certain company."
  • Traditional technology can only record this information mechanically, but cannot obtain the true intention of the user from it.
  • In this application, the first answer is input into a preset intent recognition model trained based on a machine learning model, so as to obtain the intent of the user terminal, where the intent recognition model is trained based on sample data composed of the first question, answers to the first question, and the intents associated with those answers.
  • The machine learning model can be any model, for example a CHAID decision tree model.
  • After the first answer is input into the CHAID decision tree model, the model classifies it according to the specific information in the first answer to determine which intention the user's first answer belongs to, for example whether the user intends to take the job (that is, whether the user is genuinely seeking employment), the salary level the user intends to obtain, and so on.
  • the intent recognition model based on machine learning model training may also be an intent recognition model based on deep learning text classification models (for example, TextCNN, TextRNN, etc.).
  • deep learning is a branch of machine learning.
  • TextCNN is a large-scale text classification network that applies a convolutional neural network (CNN) to the text classification task; it uses multiple kernels of different sizes to extract key information from a sentence, which makes classification more accurate (in this application, it makes intent recognition more accurate).
  • TextRNN is a large-scale text classification network that applies a recurrent neural network (RNN) to the text classification task, that is, a network that is deep in time, which likewise makes classification more accurate (in this application, it makes intent recognition more accurate).
  • Likewise, an intent recognition model based on a deep-learning text classification model (such as TextCNN or TextRNN) also needs to be trained; its training set consists of sample data composed of the first question, answers to the first question, and the intents associated with those answers. A minimal model sketch is given below.
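  • The following PyTorch sketch is offered only to make the TextCNN idea above concrete (parallel convolutions with different kernel sizes over word embeddings, max-pooled and fed to an intent classifier); the vocabulary size, embedding dimension, kernel sizes, and the three intent classes are illustrative assumptions rather than values from the patent.

```python
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    """Minimal TextCNN: parallel convolutions of different kernel sizes over word embeddings."""
    def __init__(self, vocab_size: int, embed_dim: int = 128,
                 kernel_sizes=(2, 3, 4), num_filters: int = 64, num_intents: int = 3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes
        )
        self.classifier = nn.Linear(num_filters * len(kernel_sizes), num_intents)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        x = self.embedding(token_ids).transpose(1, 2)        # (batch, embed_dim, seq_len)
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return self.classifier(torch.cat(pooled, dim=1))      # intent logits

model = TextCNN(vocab_size=10000)
dummy_answer = torch.randint(0, 10000, (1, 30))               # one tokenized first answer
intent_logits = model(dummy_answer)
print(intent_logits.shape)                                    # torch.Size([1, 3])
```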
  • As described in step S3 above, the camera of the user terminal is turned on to collect the facial image of the user corresponding to the user terminal, and the facial image is input into a preset micro-expression recognition model trained based on a neural network model for calculation, so as to obtain the user's nervousness value, where the micro-expression recognition model is trained based on sample data composed of face images and the nervousness values associated with the face images.
  • The neural network model can be a VGG16, VGG19, VGG-F, ResNet152, ResNet50, DPN131, IXception, AlexNet or DenseNet model, among others; the DPN model is preferred.
  • Since a person's facial micro-expressions reflect their emotions, an abnormal emotion may indicate that the person is in a malicious state (for example, intending to cheat their way through the AI interview). This application therefore uses the micro-expression recognition model to perform calculations on the user's facial image, so as to obtain the user's nervousness value.
  • As described in step S4 above, it is determined whether the user's nervousness value is within the preset emotion range.
  • The preset emotion range represents the emotion values that a user should exhibit in a normal interview state. If the user's nervousness value is within the preset emotion range, it indicates that the user's interview state is normal and the user's first answer is therefore credible; if the user's nervousness value is not within the preset emotion range, it indicates that the user's interview state is abnormal (the user's first answer may be a lie, or the user has poor psychological resilience and is not a good interviewee).
  • As described in step S5 above, if the user's nervousness value is within the preset emotion range, then, according to the intention of the user terminal and the preset correspondence between intentions and question nodes, the second question connected to the first question in the tree-shaped question chain is obtained. A nervousness value within the preset emotion range indicates that the user's interview state is normal and the first answer is credible, so the second question should be generated. There are multiple questions connected to the first question, and the correspondence between intentions and question nodes has been preset in the tree-shaped question chain.
  • Different user intentions therefore yield different second questions. For example, the salary level the user intends to obtain may correspond to three different intentions (high, medium, and low); if the user states a specific salary figure, that figure is input into the already-trained intention recognition model, which outputs the corresponding salary level (high, medium, or low). The second question corresponding to each of these three intentions is then obtained from the tree-shaped question chain.
  • As described in step S6 above, the second question is output to the user terminal.
  • The second question is a question for further interviewing the user corresponding to the user terminal, and the user terminal is required to answer it.
  • As described in step S7 above, the second answer from the user terminal to the second question is received, and it is determined whether the second question or the second answer meets the preset trigger condition for terminating the interview. The condition for terminating the interview can be any condition, for example: receiving the second answer from the user terminal to the second question, and judging whether the second question is at an end node of the tree-shaped question chain; if the second question is at an end node of the tree-shaped question chain, determining that the second question meets the preset trigger condition for terminating the interview; or, receiving the second answer from the user terminal to the second question, and judging whether the second answer includes a preset closing keyword; if the second answer includes a preset closing keyword, determining that the second answer meets the preset trigger condition for terminating the interview. A small sketch of such a check is given below.
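  • As an illustrative sketch only, and reusing the hypothetical QuestionNode structure from the earlier example, the function below combines the two termination checks described above; the closing phrases listed in END_KEYWORDS are invented examples, not wording from the patent.

```python
END_KEYWORDS = ("give up the interview", "not participate in the interview")  # illustrative closing phrases

def should_terminate(question_node, second_answer: str) -> bool:
    """Terminate if the question is an end node of the chain or the answer contains a closing keyword."""
    at_end_node = not question_node.children          # no connected questions after this one
    answer_requests_end = any(kw in second_answer.lower() for kw in END_KEYWORDS)
    return at_end_node or answer_requests_end
```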
  • As described in step S8 above, if the second question or the second answer meets the preset trigger condition for terminating the interview, it indicates that the AI interview can end, and an interview report is generated accordingly, wherein the interview report includes the first question, the first answer, the second question, and the second answer.
  • Further, the interview report may also include other content, for example the user's personal information and the score given to the user by the AI interview.
  • Further, if the second question or the second answer does not meet the preset trigger condition for terminating the interview, questions continue to be output to the user until an output question, or the user's corresponding answer, meets the preset trigger condition for terminating the interview.
  • In one embodiment, before step S1, the method includes:
  • S01: Acquire characteristic information of the user corresponding to the user terminal, where the characteristic information includes at least the user's occupation information;
  • S02: Mark the user with multiple tags according to the characteristic information and a preset tag marking rule;
  • S03: Filter out a final tree-shaped question chain from multiple pre-stored initial tree-shaped question chains, where the final tree-shaped question chain shares the most identical tags with the user;
  • S04: Use the final tree-shaped question chain as the tree-shaped question chain to be sent to the user terminal.
  • In this way, the tree-shaped question chain is obtained. This application presets multiple initial tree-shaped question chains, and different initial tree-shaped question chains have different scopes of application.
  • For example, the initial tree-shaped question chain for developers is different from the initial tree-shaped question chain for financial personnel.
  • To make the AI interview more accurate, this application acquires the characteristic information of the user corresponding to the user terminal, where the characteristic information includes at least the user's occupation information; marks the user with multiple tags according to the characteristic information and the preset tag marking rule; and filters out the final tree-shaped question chain from the multiple pre-stored initial tree-shaped question chains, where the final tree-shaped question chain shares the most identical tags with the user. In this way, the tree-shaped question chain that best matches the user is determined, which improves the quality of the AI interview.
  • The tags include occupation tags, income tags, academic qualification tags, and the like; a small matching sketch follows.
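  • The short sketch below illustrates, under assumed tag and chain names, how the pre-stored initial question chain sharing the most tags with the user could be selected; it is an example only, not the patent's implementation.

```python
from typing import Dict, Set

def pick_question_chain(user_tags: Set[str], initial_chains: Dict[str, Set[str]]) -> str:
    """Pick the pre-stored initial question chain sharing the most tags with the user."""
    return max(initial_chains, key=lambda name: len(initial_chains[name] & user_tags))

chains = {
    "developer_chain": {"occupation:developer", "education:bachelor"},
    "finance_chain": {"occupation:finance", "income:mid"},
}
user = {"occupation:developer", "income:mid", "education:bachelor"}
print(pick_question_chain(user, chains))   # developer_chain (2 shared tags)
```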
  • In one embodiment, the machine learning model is a CHAID decision tree model, and before step S2, the method includes:
  • S11: Acquire a specified amount of sample data, and divide the sample data into a training set and a test set, where the sample data is composed of the first question, answers to the first question, and the intentions associated with the answers;
  • S12: Input the sample data of the training set into the CHAID decision tree model for training to obtain a preliminary CHAID decision tree;
  • S13: Verify the preliminary CHAID decision tree using the sample data of the test set;
  • S14: If the verification passes, record the preliminary CHAID decision tree as the intention recognition model.
  • In this way, the intention recognition model is obtained.
  • The CHAID decision tree model refers to a decision tree model that uses the chi-square automatic interaction detection (CHAID) method.
  • the answer to the first question contains a variety of information about the respondent. For example, how is the first question about you? The corresponding answer is: I am 40 years old, have a monthly salary of 30,000, graduate from a certain university, and work experience includes employment in a certain company. Therefore, the corresponding answer can be used as an information basis for the classification of the decision tree.
  • Briefly, the principle of a CHAID decision tree is: 1. merge the group values whose effect on the decision variable does not differ significantly within a group; 2. select the variable with the largest chi-square value as the tree's splitting variable; 3. repeat steps 1 and 2 until no variable with a sufficiently large chi-square value can be selected or the sample size falls below a set number.
  • The modeling parameters of the intention recognition model can be preset, for example: the maximum depth of the decision tree (for example, 3-5 layers), the significance level for further splitting a parent node (0.05-0.1), the minimum number of samples contained in a parent node (100-200), and the minimum number of samples contained in a child node (50-100).
  • The output of the intention recognition model is the user's intention, for example whether the user intends to take the job (that is, whether the user is genuinely seeking employment), and so on. A small decision-tree sketch follows.
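  • As a rough stand-in only: scikit-learn does not provide a CHAID implementation, so the sketch below uses its CART-based DecisionTreeClassifier to illustrate a decision-tree intent classifier; the toy features, labels, and parameter values are assumptions, and the commented parameter ranges merely echo the ranges mentioned above (which apply to real data, not this toy set).

```python
from sklearn.tree import DecisionTreeClassifier

# Toy features extracted from first answers: [age, monthly_salary_in_thousands, years_experience]
X = [[40, 30, 15], [24, 8, 1], [35, 20, 10], [22, 6, 0], [45, 35, 20], [28, 12, 4]]
# Intent labels associated with each answer: 0 = genuinely job-seeking, 1 = not genuinely job-seeking
y = [0, 1, 0, 1, 0, 1]

intent_model = DecisionTreeClassifier(
    max_depth=5,            # analogous to the 3-5 layer limit mentioned in the text
    min_samples_split=2,    # the text uses 100-200 for the parent node on real data
    min_samples_leaf=1,     # the text uses 50-100 for child nodes on real data
)
intent_model.fit(X, y)
print(intent_model.predict([[40, 30, 15]]))   # [0]
```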
  • In one embodiment, step S3 includes:
  • S301: Turn on the camera of the user terminal to collect multiple initial images of the user corresponding to the terminal, where each initial image includes at least the face of the user corresponding to the terminal;
  • S302: Divide the initial image into multiple regions, compare the image data of each region with preset eye image data to obtain the difference between the image data of each region and the eye image data, and record the region whose difference does not exceed a preset value as the eye region;
  • S303: Compare the image data of each region with preset mouth image data to obtain the difference between the image data of each region and the mouth image data, and record the region whose difference does not exceed the preset value as the mouth region;
  • S304: Calculate the facial region from the positions of the eye region and the mouth region in the initial image according to a preset facial geometric proportion, and use the image within the facial region as the facial image;
  • S305: Input the facial image into the preset micro-expression recognition model trained based on a neural network model for calculation, so as to obtain the user's nervousness value, where the micro-expression recognition model is trained based on sample data composed of face images and the nervousness values associated with the face images.
  • In this way, region recognition is performed on the initial image to identify the facial region, and the image within the facial region is used as the facial image, which makes the result of the micro-expression recognition model more accurate.
  • The eye image data is standard image data that can be used to identify eye features (for example, data of human eye image regions collected in advance), and the mouth image data is standard image data that can be used to identify mouth features (for example, data of human mouth image regions collected in advance); the image data is, for example, image pixels (the three primary colors, etc.).
  • the specific method for comparing image data can be any traditional comparison method, which is not described here.
  • Further, if the eye region is larger than a single divided region, multiple consecutive regions whose differences do not exceed the preset value are together regarded as the eye region; similarly, multiple consecutive regions whose differences do not exceed the preset value are together regarded as the mouth region.
  • Since the five sense organs of a human face are distributed according to certain geometric proportions, once the eye region and the mouth region are determined, the approximate facial contour can be obtained. Accordingly, the facial region is calculated from the positions of the eye region and the mouth region in the initial image according to the preset facial geometric proportion, and the image within the facial region is used as the facial image. The facial image is then input into the preset micro-expression recognition model trained based on a neural network model for calculation, so as to obtain the user's nervousness value. A rough sketch of this region-based cropping follows.
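  • The NumPy sketch below loosely illustrates the idea of dividing an image into regions, matching them against preset eye and mouth templates, and estimating a face crop from the matched positions; the grid size, difference threshold, and geometric ratio are invented for the example and are not values from the patent.

```python
import numpy as np

def find_region(image: np.ndarray, template: np.ndarray, grid: int = 8, max_diff: float = 30.0):
    """Split the image into grid x grid regions and return the (row, col) of the region
    whose mean color differs least from the template, if that difference is within max_diff."""
    h, w = image.shape[0] // grid, image.shape[1] // grid
    best, best_diff = None, max_diff
    for r in range(grid):
        for c in range(grid):
            region = image[r * h:(r + 1) * h, c * w:(c + 1) * w]
            diff = float(np.abs(region.mean(axis=(0, 1)) - template.mean(axis=(0, 1))).mean())
            if diff < best_diff:
                best, best_diff = (r, c), diff
    return best

def crop_face(image: np.ndarray, eye_rc, mouth_rc, grid: int = 8, ratio: float = 1.6):
    """Estimate a face crop from the eye/mouth region centers using an assumed geometric ratio."""
    h, w = image.shape[0] // grid, image.shape[1] // grid
    eye_y, mouth_y = (eye_rc[0] + 0.5) * h, (mouth_rc[0] + 0.5) * h
    center_x = (eye_rc[1] + mouth_rc[1] + 1) * w / 2          # midpoint of the two region centers
    face_h = (mouth_y - eye_y) * ratio * 2                    # assumed face height from eye-mouth distance
    top = int(max(eye_y - face_h / 3, 0))
    left = int(max(center_x - face_h / 2, 0))
    return image[top:top + int(face_h), left:left + int(face_h)]
```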
  • In one embodiment, before step S3, the method includes:
  • S21: Acquire a specified amount of sample data, and divide the sample data into a training set and a test set, where the sample data includes face images and the micro-expression categories associated with the face images;
  • S22: Input the sample data of the training set into a preset neural network model for training to obtain an initial micro-expression recognition model, where the stochastic gradient descent method is used during training;
  • S23: Verify the initial micro-expression recognition model using the sample data of the test set;
  • S24: If the verification passes, record the initial micro-expression recognition model as the micro-expression recognition model.
  • In this way, the micro-expression recognition model is established.
  • This embodiment trains the micro-expression recognition model based on a neural network model.
  • the neural network model can be VGG16 model, VGG19 model, VGG-F model, ResNet152 model, ResNet50 model, DPN131 model, IXception model, AlexNet model and DenseNet model, etc.
  • the DPN model is preferred.
  • The stochastic gradient descent method randomly samples part of the training data in place of the entire training set; if the sample size is very large (for example, hundreds of thousands), it may take only tens of thousands or even thousands of samples to iterate to the optimal solution, which improves training speed.
  • Further, the training process may also use the back-propagation rule to update the parameters of each layer of the neural network model.
  • The back-propagation (BP) rule is based on the gradient descent method. The input-output relationship of a BP network is essentially a mapping: a BP neural network with n inputs and m outputs performs a continuous mapping from n-dimensional Euclidean space to a finite field in m-dimensional Euclidean space. This mapping is highly non-linear, which facilitates updating the parameters of each layer of the neural network model. The initial micro-expression recognition model is thereby obtained.
  • The sample data of the test set is then used to verify the initial micro-expression recognition model, and if the verification passes, the initial micro-expression recognition model is recorded as the micro-expression recognition model. A minimal training-loop sketch follows.
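  • For illustration only, the PyTorch sketch below shows a mini-batch stochastic gradient descent training loop with back-propagation on randomly generated stand-in data; the small CNN, the 48x48 input size, the learning rate, and the regression-style nervousness label are simplifying assumptions and deliberately much lighter than the DPN-style backbone preferred in the text.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in dataset: 48x48 grayscale face crops with nervousness labels in [0, 1].
faces = torch.rand(256, 1, 48, 48)
nervousness = torch.rand(256, 1)
loader = DataLoader(TensorDataset(faces, nervousness), batch_size=32, shuffle=True)

# Small CNN regressor used here only as a stand-in for a full micro-expression backbone.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 12 * 12, 1), nn.Sigmoid(),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)   # stochastic gradient descent
loss_fn = nn.MSELoss()

for epoch in range(3):
    for batch_faces, batch_labels in loader:               # each mini-batch stands in for the full set
        optimizer.zero_grad()
        loss = loss_fn(model(batch_faces), batch_labels)
        loss.backward()                                     # back-propagation computes layer gradients
        optimizer.step()                                    # SGD updates each layer's parameters
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```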
  • In one embodiment, step S7 includes:
  • S701: Receive the second answer from the user terminal to the second question, and judge whether the second question is at an end node of the tree-shaped question chain;
  • S702: If the second question is at an end node of the tree-shaped question chain, determine that the second question meets the preset trigger condition for terminating the interview;
  • S703: Alternatively, receive the second answer from the user terminal to the second question, and judge whether the second answer includes a preset ending keyword;
  • S704: If the second answer includes a preset ending keyword, determine that the second answer meets the preset trigger condition for terminating the interview.
  • In this way, the second answer from the user terminal to the second question is received, and it is determined whether the second question or the second answer meets the preset trigger condition for terminating the interview.
  • This application uses two methods to determine whether the interview can end, one is to judge based on the second question, and the other is to judge based on the second answer. Specifically, if the second question is at the end node of the tree-shaped question chain, it indicates that there are no new questions after the second question, so it is determined that the second question meets the preset trigger condition for terminating the interview . If the second answer includes the preset ending keyword, it means that the user intends to end the interview, and therefore it is determined that the second answer meets the preset trigger condition for terminating the interview.
  • The preset ending keywords are, for example, "give up the interview", "will not participate in the interview", and so on, giving the user the opportunity to terminate the interview and avoiding wasting the user's time and energy as well as the resources of the AI interview system.
  • In one embodiment, after step S7, the method includes:
  • S71: If the second question or the second answer does not meet the preset trigger condition for terminating the interview, generate a recognition intention instruction, where the recognition intention instruction is used to indicate that the intention of the user terminal is to be recognized from the second answer;
  • S72: According to the recognition intention instruction, acquire the intention of the user terminal recognized from the second answer, and obtain, according to the preset correspondence between intentions and question nodes, the third question connected to the second question in the tree-shaped question chain;
  • S73: Send the third question to the user terminal.
  • In this way, the third question is obtained and sent to the user terminal. If the second question or the second answer does not meet the preset trigger condition for terminating the interview, it indicates that the interview has not ended, and a third question therefore needs to be generated. Accordingly, a recognition intention instruction is generated, which is used to indicate that the intention of the user terminal is to be recognized from the second answer.
  • The intention of the user terminal can be recognized from the second answer in any manner, including but not limited to the same manner as recognizing the intention of the user terminal from the first answer described above. Then, according to the recognition intention instruction, the intention of the user terminal recognized from the second answer is acquired, and, according to the preset correspondence between intentions and question nodes, the third question connected to the second question in the tree-shaped question chain is obtained and sent to the user terminal.
  • Since the micro-expression recognition model has already been used to obtain the user's emotion value, the micro-expression recognition model need not be used again to obtain the user's emotion value before the third question is generated.
  • Of course, the micro-expression recognition model can also be used again before the third question is generated, to obtain the user's nervousness value and to determine whether it is within the preset emotion range, so as to judge once more whether the user is in a normal state.
  • Referring to FIG. 2, an embodiment of the application provides an interview device based on an AI interview system, including:
  • the first question output unit 10 is configured to output a preset first question to a user terminal, where the first question is the root question of a tree-shaped question chain generated in advance;
  • The intention acquisition unit 20 is configured to receive the first answer from the user terminal to the first question, and input the first answer into a preset intention recognition model trained based on a machine learning model, so as to obtain the intention of the user terminal, wherein the intention recognition model is trained based on sample data composed of the first question, answers to the first question, and the intentions associated with the answers;
  • The nervousness value acquiring unit 30 is configured to turn on the camera of the user terminal to collect the facial image of the user corresponding to the user terminal, and input the facial image into a preset micro-expression recognition model trained based on a neural network model for calculation, so as to obtain the nervousness value of the user, wherein the micro-expression recognition model is trained based on sample data composed of face images and the nervousness values associated with the face images;
  • the nervousness value judging unit 40 is used to judge whether the nervousness value of the user is within a preset emotional range value
  • The second question acquisition unit 50 is configured to, if the nervousness value of the user is within the preset emotion range, obtain, according to the intention of the user terminal and the preset correspondence between intentions and question nodes, the second question connected to the first question in the tree-shaped question chain;
  • the second question output unit 60 is configured to output the second question to the user terminal
  • the termination judgment unit 70 is configured to receive a second answer from the user terminal to the second question, and judge whether the second question or the second answer meets a preset trigger condition for terminating the interview;
  • The interview report generating unit 80 is configured to generate an interview report if the second question or the second answer meets the preset trigger condition for terminating the interview, wherein the interview report includes the first question, the first answer, the second question, and the second answer.
  • The operations performed by each of the above units correspond one-to-one to the steps of the interview method based on the AI interview system in the foregoing embodiments, and are not repeated here.
  • the AI interview system prestores a plurality of initial tree-shaped question chains, and the initial tree-shaped question chains are marked with a plurality of tags, and the device includes:
  • the characteristic information acquiring unit is configured to acquire characteristic information of the user corresponding to the user terminal, where the characteristic information includes at least occupational information of the user;
  • the label marking unit is configured to mark a plurality of labels on the user according to the characteristic information and according to a preset label marking rule;
  • the question chain screening unit is configured to filter out a final tree-shaped question chain from a plurality of pre-stored initial tree-shaped question chains, wherein the final tree-shaped question chain and the user have the most identical tags;
  • The question chain sending unit is configured to use the final tree-shaped question chain as the tree-shaped question chain to be sent to the user terminal.
  • In one embodiment, the machine learning model is a CHAID decision tree model, and the device includes:
  • The sample data acquisition unit is used to acquire a specified amount of sample data and divide the sample data into a training set and a test set, wherein the sample data is composed of the first question, answers to the first question, and the intentions associated with the answers;
  • the training unit is used to input the sample data of the training set into the CHAID decision tree model for training to obtain a preliminary CHAID decision tree;
  • a verification unit configured to verify the preliminary CHAID decision tree by using the sample data of the test set
  • the marking unit is configured to record the preliminary CHAID decision tree as the intention recognition model if the verification is passed.
  • In one embodiment, the nervousness value acquiring unit 30 includes:
  • An image collection subunit configured to turn on the camera of the user terminal to collect multiple initial images of the user corresponding to the terminal, wherein the initial image includes at least the face of the user corresponding to the terminal;
  • the area division subunit is used to divide the initial image into multiple areas, compare the image data of each area with the preset eye image data, and obtain the difference between the image data of each area and the eye image data, and The area where the difference does not exceed the preset value is recorded as the eye area;
  • the mouth area acquisition subunit is used to compare the image data of each area with the preset mouth image data to obtain the difference between the image data of each area and the mouth image data, and record the area where the difference does not exceed the preset value. Is the mouth area;
  • The eye region acquisition subunit is used to calculate the facial region from the positions of the eye region and the mouth region in the initial image according to a preset facial geometric proportion, and to use the image within the facial region as the facial image;
  • The nervousness value acquiring subunit is used to input the facial image into a preset micro-expression recognition model trained based on a neural network model for calculation, so as to obtain the nervousness value of the user, wherein the micro-expression recognition model is trained based on sample data composed of face images and the nervousness values associated with the face images.
  • the device includes:
  • the training data acquisition unit is used to acquire a specified number of sample data, and divide the sample data into a training set and a test set; wherein the sample data includes a face image and a micro-expression category associated with the face image;
  • the initial micro-expression recognition model training unit is used to input the sample data of the training set into the preset neural network model for training to obtain the initial micro-expression recognition model, where the stochastic gradient descent method is used in the training process;
  • An initial micro-expression recognition model verification unit configured to verify the initial micro-expression recognition model by using sample data of the test set
  • the micro-expression recognition model acquiring unit is configured to record the initial micro-expression recognition model as the micro-expression recognition model if the verification is passed.
  • the termination judgment unit 70 includes:
  • An end node judging subunit for receiving the second answer from the user terminal to the second question, and judging whether the second question is at the end node of the tree-shaped question chain;
  • the first termination determination subunit is configured to determine that the second question meets a preset trigger condition for terminating the interview if the second question is at the end node of the tree-shaped question chain;
  • the ending keyword judgment subunit is used to alternatively receive the second answer from the user terminal to the second question, and determine whether the second answer includes a preset ending keyword;
  • the second termination determination subunit is configured to determine that the second answer meets a preset triggering condition for terminating the interview if the second answer includes a preset end keyword.
  • the device includes:
  • A recognition intention instruction generating unit, configured to generate a recognition intention instruction if the second question or the second answer does not meet the preset trigger condition for terminating the interview, where the recognition intention instruction is used to indicate that the intention of the user terminal is to be recognized from the second answer;
  • An intention recognition unit, configured to acquire, according to the recognition intention instruction, the intention of the user terminal recognized from the second answer, and to obtain, according to the preset correspondence between intentions and question nodes, the third question connected to the second question in the tree-shaped question chain;
  • the third question sending unit is configured to send the third question to the user terminal.
  • Referring to FIG. 3, an embodiment of the present application also provides a computer device.
  • The computer device may be a server, and its internal structure may be as shown in the figure.
  • The computer device includes a processor, a memory, a network interface, and a database connected through a system bus, wherein the processor of the computer device is used to provide computing and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system, computer readable instructions, and a database.
  • The internal memory provides an environment for the operation of the operating system and the computer-readable instructions in the non-volatile storage medium.
  • the database of the computer equipment is used to store data used in the interview method based on the AI interview system.
  • the network interface of the computer device is used to communicate with an external terminal through a network connection.
  • the computer-readable instructions are executed by the processor to implement the interview method based on the AI interview system shown in any of the above embodiments.
  • The steps performed when the processor executes the interview method based on the AI interview system include:
  • outputting a preset first question to the user terminal; receiving the first answer from the user terminal to the first question, and inputting the first answer into a preset intention recognition model trained based on a machine learning model to obtain the intention of the user terminal, where the intention recognition model is trained based on sample data composed of the first question, answers to the first question, and the intentions associated with the answers;
  • turning on the camera of the user terminal to collect a facial image of the user corresponding to the user terminal, and inputting the facial image into a preset micro-expression recognition model trained based on a neural network model for calculation, so as to obtain the user's nervousness value, where the micro-expression recognition model is trained based on sample data composed of face images and the nervousness values associated with the face images;
  • determining whether the user's nervousness value is within a preset emotion range; if it is, obtaining, according to the intention of the user terminal and the preset correspondence between intentions and question nodes, the second question connected to the first question in the tree-shaped question chain, and outputting the second question to the user terminal;
  • receiving the second answer from the user terminal to the second question, and determining whether the second question or the second answer meets a preset trigger condition for terminating the interview; if it does, generating an interview report, where the interview report includes the first question, the first answer, the second question, and the second answer.
  • An embodiment of the present application also provides a computer-readable storage medium. The computer-readable storage medium may be non-volatile or volatile, and computer-readable instructions are stored thereon. When the computer-readable instructions are executed by a processor, the interview method based on the AI interview system shown in any of the above embodiments is implemented, wherein the interview method based on the AI interview system includes:
  • outputting a preset first question to the user terminal; receiving the first answer from the user terminal to the first question, and inputting the first answer into a preset intention recognition model trained based on a machine learning model to obtain the intention of the user terminal, where the intention recognition model is trained based on sample data composed of the first question, answers to the first question, and the intentions associated with the answers;
  • turning on the camera of the user terminal to collect a facial image of the user corresponding to the user terminal, and inputting the facial image into a preset micro-expression recognition model trained based on a neural network model for calculation, so as to obtain the user's nervousness value, where the micro-expression recognition model is trained based on sample data composed of face images and the nervousness values associated with the face images;
  • determining whether the user's nervousness value is within a preset emotion range; if it is, obtaining, according to the intention of the user terminal and the preset correspondence between intentions and question nodes, the second question connected to the first question in the tree-shaped question chain, and outputting the second question to the user terminal;
  • receiving the second answer from the user terminal to the second question, and determining whether the second question or the second answer meets a preset trigger condition for terminating the interview; if it does, generating an interview report, where the interview report includes the first question, the first answer, the second question, and the second answer.
  • Any reference to memory, storage, a database, or another medium provided in this application and used in the embodiments may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Image Analysis (AREA)

Abstract

This application relates to the field of artificial intelligence technology and discloses an interview method, device, computer equipment, and storage medium based on an AI interview system. The method includes: outputting a preset first question to the user terminal; receiving the first answer from the user terminal to the first question and obtaining the intention of the user terminal; collecting the facial image of the user corresponding to the user terminal, and inputting the facial image into a preset micro-expression recognition model trained based on a neural network model for calculation, so as to obtain the user's nervousness value; if the user's nervousness value is within a preset emotion range, obtaining a second question; outputting the second question to the user terminal; receiving the second answer from the user terminal to the second question; and, if a preset trigger condition for terminating the interview is met, generating an interview report. In this way, a questionnaire that supports flexible branching is provided, the quality of the questionnaire answers is ensured, and abnormal interviewees are screened out.

Description

基于AI面试系统的面试方法、装置和计算机设备
本申请要求于2019年09月06日提交中国专利局、申请号为201910843465.6,发明名称为“基于AI面试系统的面试方法、装置和计算机设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及到人工智能技术领域,特别是涉及到一种基于AI面试系统的面试方法、装置、计算机设备和存储介质。
背景技术
AI(人工智能)面试能够减少人工成本,具有优秀的发展潜力。传统的AI面试方法,只能向答题者提供一份已确定的问卷,根据答题者回答所述问卷的答案生成面试报告。发明人意识到,由于答题者回答的问卷是预先确定的,因此适应的人群也是固定的,适用面窄,因此对于属于不同人群的面试者,需要提供不同的问卷,需要耗费过多的资源与成本。由于答题者中存在恶意面试者或者心理素质不佳的面试者,而传统的技术方案或者不考虑这些方面,或者是由资深面试官进行筛选,因此传统技术的方案无法做到有效淘汰非正常面试者。
技术问题
本申请的主要目的为提供一种基于AI面试系统的面试方法、装置、计算机设备和存储介质,旨在提供能够灵活跳转的问卷、并保证问卷答案的质量,同时淘汰非正常面试者。
技术解决方案
为了实现上述发明目的,第一方面,本申请提出一种基于AI面试系统的面试方法,包括以下步骤:
向用户端输出预设的第一问题,其中所述第一问题是预先生成的树状问题链的根问题;
接收所述用户端回复所述第一问题的第一答案,将所述第一答案输入预设的基于机器学习模型训练完成的意图识别模型中运算,从而获得所述用户端的意图,其中,所述意图识别模型基于所述第一问题、对所述第一问题的回答、以及与所述回答关联的意图组成的样本数据训练而成;
打开所述用户端的摄像头采集所述用户端对应的用户的面部图像,并将所述面部图像输入到预设的基于神经网络模型训练完成的微表情识别模型中进行运算,从而得到所述用户的紧张情绪值,其中,所述微表情识别模型基于人脸图像,以及与所述人脸图像关联的紧张情绪值组成的样本数据训练而成;
判断所述用户的紧张情绪值是否处于预设的情绪范围值之内;
若所述用户的紧张情绪值处于预设的情绪范围值之内,则根据所述用户端的意图,按照预设的意图与问题节点的对应关系,获取在所述树状问题链中与所述第一问题相连接的第二问题;
向所述用户端输出所述第二问题;
接收所述用户端回复所述第二问题的第二答案,并判断所述第二问题或者所述第二答案是否符合预设的终止面试触发条件;
若所述第二问题或者所述第二答案符合预设的终止面试触发条件,则生成面试报告,其中所述面试报告包括所述第一问题、第一答案、第二问题和第二答案。
第二方面,本申请提供一种基于AI面试系统的面试装置,包括:
第一问题输出单元,用于向用户端输出预设的第一问题,其中所述第一问题是预先生成的树状问题链的根问题;
意图获取单元,用于接收所述用户端回复所述第一问题的第一答案,将所述第一答案输入预设的基于机器学习模型训练完成的意图识别模型中运算,从而获得所述用户端的意图,其中,所述意图识别模型基于所述第一问题、对所述第一问题的回答、以及与所述回答关联的意图组成的样本数据训练而成;
紧张情绪值获取单元,用于打开所述用户端的摄像头采集所述用户端对应的用户的面部图像,并将所述面部图像输入到预设的基于神经网络模型训练完成的微表情识别模型中进行运算,从而得到所述用户的紧张情绪值,其中,所述微表情识别模型基于人脸图像,以及与所述人脸图像关联的紧张情绪值组成的样本数据训练而成;
紧张情绪值判断单元,用于判断所述用户的紧张情绪值是否处于预设的情绪范围值之内;
第二问题获取单元,用于若所述用户的紧张情绪值处于预设的情绪范围值之内,则根据所述用户端的意图,按照预设的意图与问题节点的对应关系,获取在所述树状问题链中与所述第一问题相连接的第二问题;
第二问题输出单元,用于向所述用户端输出所述第二问题;
终止判断单元,用于接收所述用户端回复所述第二问题的第二答案,并判断所述第二问题或者所述第二答案是否符合预设的终止面试触发条件;
面试报告生成单元,用于若所述第二问题或者所述第二答案符合预设的终止面试触发条件,则生成 面试报告,其中所述面试报告包括所述第一问题、第一答案、第二问题和第二答案。
第三方面,本申请提供一种计算机设备,包括存储器和处理器,所述存储器存储有计算机可读指令,所述处理器执行所述计算机可读指令时实现一种基于AI面试系统的面试方法,其中,所述基于AI面试系统的面试方法包括以下步骤:
向用户端输出预设的第一问题,其中所述第一问题是预先生成的树状问题链的根问题;
接收所述用户端回复所述第一问题的第一答案,将所述第一答案输入预设的基于机器学习模型训练完成的意图识别模型中运算,从而获得所述用户端的意图,其中,所述意图识别模型基于所述第一问题、对所述第一问题的回答、以及与所述回答关联的意图组成的样本数据训练而成;
打开所述用户端的摄像头采集所述用户端对应的用户的面部图像,并将所述面部图像输入到预设的基于神经网络模型训练完成的微表情识别模型中进行运算,从而得到所述用户的紧张情绪值,其中,所述微表情识别模型基于人脸图像,以及与所述人脸图像关联的紧张情绪值组成的样本数据训练而成;
判断所述用户的紧张情绪值是否处于预设的情绪范围值之内;
若所述用户的紧张情绪值处于预设的情绪范围值之内,则根据所述用户端的意图,按照预设的意图与问题节点的对应关系,获取在所述树状问题链中与所述第一问题相连接的第二问题;
向所述用户端输出所述第二问题;
接收所述用户端回复所述第二问题的第二答案,并判断所述第二问题或者所述第二答案是否符合预设的终止面试触发条件;
若所述第二问题或者所述第二答案符合预设的终止面试触发条件,则生成面试报告,其中所述面试报告包括所述第一问题、第一答案、第二问题和第二答案。
第四方面,本申请提供一种计算机可读存储介质,其上存储有计算机可读指令,所述计算机可读指令被处理器执行时实现一种基于AI面试系统的面试方法,其中,所述基于AI面试系统的面试方法包括以下步骤:
向用户端输出预设的第一问题,其中所述第一问题是预先生成的树状问题链的根问题;
接收所述用户端回复所述第一问题的第一答案,将所述第一答案输入预设的基于机器学习模型训练完成的意图识别模型中运算,从而获得所述用户端的意图,其中,所述意图识别模型基于所述第一问题、对所述第一问题的回答、以及与所述回答关联的意图组成的样本数据训练而成;
打开所述用户端的摄像头采集所述用户端对应的用户的面部图像,并将所述面部图像输入到预设的基于神经网络模型训练完成的微表情识别模型中进行运算,从而得到所述用户的紧张情绪值,其中,所述微表情识别模型基于人脸图像,以及与所述人脸图像关联的紧张情绪值组成的样本数据训练而成;
判断所述用户的紧张情绪值是否处于预设的情绪范围值之内;
若所述用户的紧张情绪值处于预设的情绪范围值之内,则根据所述用户端的意图,按照预设的意图与问题节点的对应关系,获取在所述树状问题链中与所述第一问题相连接的第二问题;
向所述用户端输出所述第二问题;
接收所述用户端回复所述第二问题的第二答案,并判断所述第二问题或者所述第二答案是否符合预设的终止面试触发条件;
若所述第二问题或者所述第二答案符合预设的终止面试触发条件,则生成面试报告,其中所述面试报告包括所述第一问题、第一答案、第二问题和第二答案。
有益效果
本申请的基于AI面试系统的面试方法、装置、计算机设备和存储介质,实现了提供能够灵活跳转的问卷、并保证问卷答案的质量,同时淘汰非正常面试者。
附图说明
图1为本申请一实施例的基于AI面试系统的面试方法的流程示意图;
图2为本申请一实施例的基于AI面试系统的面试装置的结构示意框图;
图3为本申请一实施例的计算机设备的结构示意框图。
本发明的最佳实施方式
为了使本申请的目的、技术方案及优点更加清楚明白,以下结合附图及实施例,对本申请进行进一步详细说明。应当理解,此处描述的具体实施例仅仅用以解释本申请,并不用于限定本申请。
参照图1,本申请实施例提供一种基于AI面试系统的面试方法,包括以下步骤:
S1、向用户端输出预设的第一问题,其中所述第一问题是预先生成的树状问题链的根问题;
S2、接收所述用户端回复所述第一问题的第一答案,将所述第一答案输入预设的基于机器学习模型训练完成的意图识别模型中运算,从而获得所述用户端的意图,其中,所述意图识别模型基于所述第一问题、对所述第一问题的回答、以及与所述回答关联的意图组成的样本数据训练而成;
S3、打开所述用户端的摄像头采集所述用户端对应的用户的面部图像,并将所述面部图像输入到预 设的基于神经网络模型训练完成的微表情识别模型中进行运算,从而得到所述用户的紧张情绪值,其中,所述微表情识别模型基于人脸图像,以及与所述人脸图像关联的紧张情绪值组成的样本数据训练而成;
S4、判断所述用户的紧张情绪值是否处于预设的情绪范围值之内;
S5、若所述用户的紧张情绪值处于预设的情绪范围值之内,则根据所述用户端的意图,按照预设的意图与问题节点的对应关系,获取在所述树状问题链中与所述第一问题相连接的第二问题;
S6、向所述用户端输出所述第二问题;
S7、接收所述用户端回复所述第二问题的第二答案,并判断所述第二问题或者所述第二答案是否符合预设的终止面试触发条件;
S8、若所述第二问题或者所述第二答案符合预设的终止面试触发条件,则生成面试报告,其中所述面试报告包括所述第一问题、第一答案、第二问题和第二答案。
如上述步骤S1所述,向用户端输出预设的第一问题,其中所述第一问题是预先生成的树状问题链的根问题。本申请的AI面试系统用于向用户端对应的用户进行面试。所述用户端可以为用户所拥有的终端,所述终端通过与所述AI面试系统建立连接,在得到AI面试系统许可后成为用户端。所述用户端也可以为AI面试系统的一个输出端,用户通过所述输出端即可接受所述AI面试系统的面试。所述树状问题链是指由多个问题构成的树状网络,其中所述第一问题是第一个节点(即根问题),因此所述第一问题至少连接有两个其他问题。本申请根据用户端回答所述第一问题的真实意图,将与第一问题连接的其中一个问题输出给所述用户端,以保证针对性、灵活地提供合适的问题,以保证面试质量。
如上述步骤S2所述,接收所述用户端回复所述第一问题的第一答案,将所述第一答案输入预设的基于机器学习模型训练完成的意图识别模型中运算,从而获得所述用户端的意图,其中,所述意图识别模型基于所述第一问题、对所述第一问题的回答、以及与所述回答关联的意图组成的样本数据训练而成。由于用户端回复第一问题的第一答案可以有不同的形式,例如第一问题为你的近况如何?相应回答为:我的年龄为40岁、月薪为3万、毕业院校为某大学、工作经历包括在某企业就职等。传统技术仅能机械记录这些信息,却无法从中获取所述用户的真实意图。本申请将所述第一答案输入预设的基于机器学习模型训练完成的意图识别模型中运算,从而获得所述用户端的意图,其中,所述意图识别模型基于所述第一问题、对所述第一问题的回答、以及与所述回答关联的意图组成的样本数据训练而成。其中所述机器学习模型可为任意模型,例如为CHAID决策树模型,将第一答案输入所述CHAID决策树模型后,CHAID决策树模型将根据第一答案中的具体信息进行分类处理,从而确定所述用户的第一答案属于哪种意图,例如为用户是否意图就职(即用户是否是真的谋求就职),用户意图的薪资水平等。进一步地,所述基于机器学习模型训练完成的意图识别模型还可以为基于深度学习文本分类的模型(例如TextCNN,TextRNN等)训练完成的意图识别模型。其中深度学习是机器学习的一个分支。其中所述TextCNN是一种大规模文本分类网络,是指将卷积神经网络CNN应用到文本分类任务,利用多个不同尺寸的内核来提取句子中的关键信息,从而使分类更准确(本申请中使意图识别更准确)。所述TextRNN是一种大规模文本分类网络,是指将循环神经网络RNN应用到文本分类任务,即将在时间上深度的神经网络应用至文本分类任务中,从而使分类更准确(本申请中使意图识别更准确)。同样的,基于深度学习文本分类的模型(例如TextCNN,TextRNN等)训练完成的意图识别模型也需要进行训练得到,其训练集由所述第一问题、对所述第一问题的回答、以及与所述回答关联的意图组成的样本数据构成。
如上述步骤S3所述,打开所述用户端的摄像头采集所述用户端对应的用户的面部图像,并将所述面部图像输入到预设的基于神经网络模型训练完成的微表情识别模型中进行运算,从而得到所述用户的紧张情绪值,其中,所述微表情识别模型基于人脸图像,以及与所述人脸图像关联的紧张情绪值组成的样本数据训练而成。其中神经网络模型可为VGG16模型、VGG19模型、VGG-F模型、ResNet152模型、ResNet50模型、DPN131模型、IXception模型、AlexNet模型和DenseNet模型等,优选DPN模型。由于人的面部微表情能够反应人的情绪,当情绪异常时,可能表示人处于恶意状态(例如意图造假而通过AI面试)。因此本申请采用微表情识别模型对用户的面部图像进行运算,从而得到所述用户的紧张情绪值。
如上述步骤S4所述,判断所述用户的紧张情绪值是否处于预设的情绪范围值之内。预设的情绪范围值代表了处于正常面试状态下的用户应具有的情绪值,若所述用户的紧张情绪值处于预设的情绪范围值之内,表明所述用户的面试状态正常,因此用户的第一答案可信;所述用户的紧张情绪值不处于预设的情绪范围值之内,表明所述用户的面试状态不正常(用户的第一答案可能为谎言,或者用户的心理素质不佳,不是优秀的面试者)。
如上述步骤S5所述,若所述用户的紧张情绪值处于预设的情绪范围值之内,则根据所述用户端的意图,按照预设的意图与问题节点的对应关系,获取在所述树状问题链中与所述第一问题相连接的第二问题。若所述用户的紧张情绪值处于预设的情绪范围值之内,表明所述用户的面试状态正常,因此用户的第一答案可信,因此应当生成第二问题。其中与第一问题相连接的问题有多个,在预设的树状问题链 中已经预设了意图与问题节点的对应关系,因此根据不同的用户意图,将获取不同的第二问题,例如当用户意图的薪资水平为高中低三个不同意图时(其中,若用户意图的薪资水平为具体的薪资数字时,通过将所述薪资数字输入所述意图识别模型中,由于所述意图识别模型是已经训练好的,因此所述意图识别模型将根据输入的具体的薪资数字输出用户意图的薪资水平,其中用户意图的薪资水平包括高、中和低三种),将从所述树状问题链中按照所述三个不同意图获取对应的第二问题。
如上述步骤S6所述,向所述用户端输出所述第二问题。所述第二问题是进一步面试所述用户端对应用户的问题,应当要求所述用户端进行回答。
如上述步骤S7所述,接收所述用户端回复所述第二问题的第二答案,并判断所述第二问题或者所述第二答案是否符合预设的终止面试触发条件。其中终止面试触条件可为任意条件,例如:接收所述用户端回复所述第二问题的第二答案,并判断所述第二问题是否处于所述树状问题链的末端节点;若所述第二问题处于所述树状问题链的末端节点,则判定所述第二问题符合预设的终止面试触发条件;或者,接收所述用户端回复所述第二问题的第二答案,并判断所述第二答案是否包括预设的结束关键词语;若所述第二答案包括预设的结束关键词语,则判定所述第二答案符合预设的终止面试触发条件。
如上述步骤S8所述,若所述第二问题或者所述第二答案符合预设的终止面试触发条件,则生成面试报告,其中所述面试报告包括所述第一问题、第一答案、第二问题和第二答案。若所述第二问题或者所述第二答案符合预设的终止面试触发条件,表明本次AI面试可以结束,据此生成面试报告,其中所述面试报告包括所述第一问题、第一答案、第二问题和第二答案。进一步地,所述面试报告还可以包括其他内容,例如包括用户的个人信息,AI面试对用户的评分等。进一步地,若所述第二问题或者所述第二答案不符合预设的终止面试触发条件,则继续向用户输出问题,直至输出的问题或者用户对应的答案符合预设的终止面试触发条件。
在一个实施方式中,所述步骤S1之前,包括:
S01、获取所述用户端对应的用户的特征信息,所述特征信息至少包括所述用户的职业信息;
S02、根据所述特征信息,按照预设的标签标记规则,对所述用户标记多个标签;
S03、从预存的多个初始树状问题链中筛选出最终树状问题链,其中所述最终树状问题链与所述用户具有最多的相同标签;
S04、将所述最终树状问题链作为将发送给所述用户端的树状问题链。
如上所述,实现了获得树状问题链。本申请预设有多个初始树状问题链,不同的初始树状问题链适用范围不同,例如针对开发人员的初始树状问题链与针对财务人员的初始树状问题链不同。为了更准确地进行AI面试,本申请采用获取所述用户端对应的用户的特征信息,所述特征信息至少包括所述用户的职业信息;根据所述特征信息,按照预设的标签标记规则,对所述用户标记多个标签;从预存的多个初始树状问题链中筛选出最终树状问题链,其中所述最终树状问题链与所述用户具有最多的相同标签的方式,确定与所述用户最匹配的树状问题链,以提高AI面试的质量。其中所述标签包括职业标签、收入标签、学历标签等。
在一个实施方式中,所述机器学习模型为CHAID决策树模型,所述步骤S2之前,包括:
S11、获取指定量的样本数据,并将样本数据分成训练集和测试集;其中,所述样本数据由所述第一问题、对所述第一问题的回答、以及与所述回答关联的意图所构成;
S12、将训练集的样本数据输入到CHAID决策树模型中进行训练,得到初步CHAID决策树;
S13、利用所述测试集的样本数据验证所述初步CHAID决策树;
S14、如果验证通过,则将所述初步CHAID决策树记为所述意图识别模型。
如上所述,实现了获得意图识别模型。其中CHAID决策树模型指采用卡方自动交互检测法CHAID的决策树模型。其中对所述第一问题的回答中包含了答题者的多种信息,例如第一问题为你的近况如何?相应回答为:我的年龄为40岁、月薪为3万、毕业院校为某大学、工作经历包括在某企业就职等。从而所述相应回答可作为决策树分类的信息依据。其中,在此简单介绍CHAID决策树的原理:1、合并组内对决策变量影响差别不显著的组值;2、选取卡方值最大的变量作为树分类变量;3、重复1、2步骤,至不能选取卡方值大于某值或样本小于某数。其中,可预先设所述意图识别模型的建模标准参数,例如设置决策树的最大层数、母节点可再分的显著水平、母节点包含的最小样本数、子节点包含的最小样本数。其中,决策树的最大层数例如为3-5层、母节点可再分的显著水平为0.05-0.1、母节点包含的最小样本数100-200、子节点包含的最小样本数50-100。采用训练集集的样本数据训练得到初步CHAID决策树,再用测试集的样本进行验证,若通过,则记为所述意图识别模型。其中,所述意图识别模型输出的结果为用户的意图,例如为用户是否意图就职(即用户是否是真的谋求就职)等。
在一个实施方式中,所述步骤S3,包括:
S301、打开所述用户端的摄像头采集所述终端对应的用户的多幅初始图像,其中所述初始图像至少包括所述终端对应的用户的面部;
S302、将所述初始图像划分为多个区域,将每个区域的图像数据与预设的眼睛图像数据进行对比,得到每个区域图像数据与眼睛图像数据的差值,将差值不超过预设数值的区域记为眼睛区域;
S303、将每个区域的图像数据与预设的嘴巴图像数据进行比较,得到每个区域图像数据与嘴巴图像数据的差值,将差值不超过预设数值的区域记为嘴巴区域;
S304、根据预设的面部几何结构比例,利用所述眼睛区域与所述嘴巴区域在所述初始图像中的位置,计算出面部区域,并将所述面部区域范围内的图像作为面部图像;
S305、将所述面部图像输入到预设的基于神经网络模型训练完成的微表情识别模型中进行运算,从而得到所述用户的紧张情绪值,其中,所述微表情识别模型基于人脸图像,以及与所述人脸图像关联的紧张情绪值组成的样本数据训练而成。
如上所述,实现了对所述初始图像进行区域识别处理从而识别出面部区域,并将所述面部区域范围内的图像作为面部图像,从而使微表情识别模型的结果更准确。其中,眼睛图像数据为标准的可用于标识眼睛特征的图像数据(例如预先采集得到的人的眼睛图像区域的数据),嘴巴图像数据为标准的可用于标识嘴巴特征的图像数据(例如预先采集得到的人的嘴巴图像区域的数据),所述图像数据例如为图像像素(三原色等)等。而具体比对图像数据的方法可采用任意的传统比较方式,在此不赘述。进一步地,若所述眼睛区域大于划分的单个区域,则以差值不超过预设数值的多个连续的区域为眼睛区域;同理,以差值不超过预设数值的多个连续的区域为鼻子区域。由于人的面部中的五官是按一定的几何结构比例分布的,若确定眼睛区域与嘴巴区域,即可获知大致的面部轮廓。据此,根据预设的面部几何结构比例,利用所述眼睛区域与所述嘴巴区域在所述初始图像中的位置,计算出面部区域,并将所述面部区域范围内的图像作为面部图像。再将所述面部图像输入到预设的基于神经网络模型训练完成的微表情识别模型中进行运算,从而得到所述用户的紧张情绪值。
在一个实施方式中,所述步骤S3之前,包括:
S21、获取指定数量的样本数据,并将样本数据分成训练集和测试集;其中,所述样本数据包括人脸图像,以及与所述人脸图像关联的微表情类别;
S22、将训练集的样本数据输入到预设的神经网络模型中进行训练,得到初始微表情识别模型,其中,训练的过程中采用随机梯度下降法;
S23、利用测试集的样本数据验证所述初始微表情识别模型;
S24、若验证通过,则将所述初始微表情识别模型记为所述微表情识别模型。
如上所述,实现了设置微表情识别模型。本实施方式基于神经网络模型以训练出微表情识别模型。其中神经网络模型可为VGG16模型、VGG19模型、VGG-F模型、ResNet152模型、ResNet50模型、DPN131模型、IXception模型、AlexNet模型和DenseNet模型等,优选DPN模型。其中,随机梯度下降法就是随机取样一些训练数据,替代整个训练集,如果样本量很大的情况(例如几十万),那么可能只用其中几万条或者几千条的样本,就已经迭代到最优解了,可以提高训练速度。进一步地,训练的过程还可以采用反向传导法则更新所述神经网络模型各层的参数。其中反向传导法则(BP)建立在梯度下降法的基础上,BP网络的输入输出关系实质上是一种映射关系:一个n输入m输出的BP神经网络所完成的功能是从n维欧氏空间向m维欧氏空间中一有限域的连续映射,这一映射具有高度非线性,有利于神经网络模型各层的参数的更新。从而获得初始微表情识别模型。再利用测试集的样本数据验证所述初始微表情识别模型,若验证通过,则将所述初始微表情识别模型记为所述微表情识别模型。
在一个实施方式中,所述步骤S7,包括:
S701、接收所述用户端回复所述第二问题的第二答案,并判断所述第二问题是否处于所述树状问题链的末端节点;
S702、若所述第二问题处于所述树状问题链的末端节点,则判定所述第二问题符合预设的终止面试触发条件;
S703、或者,接收所述用户端回复所述第二问题的第二答案,并判断所述第二答案是否包括预设的结束关键词语;
S704、若所述第二答案包括预设的结束关键词语,则判定所述第二答案符合预设的终止面试触发条件。
如上所述,实现了接收所述用户端回复所述第二问题的第二答案,并判断所述第二问题或者所述第二答案是否符合预设的终止面试触发条件。本申请采用两种方式判断面试可以结束,一种是根据第二问题进行判断,一种是根据第二答案进行判断。具体地,若所述第二问题处于所述树状问题链的末端节点,表明所述第二问题之后,不再有新的问题,因此判定所述第二问题符合预设的终止面试触发条件。若所述第二答案包括预设的结束关键词语,表示用户意图结止本次面试,因此判定所述第二答案符合预设的终止面试触发条件。其中所述预设的结束关键词语例如为:放弃面试、不参加面试等,从而给予用户本身终止面试的机会,以免浪费用户的时间、精力与本AI面试系统的资源。
在一个实施方式中,所述步骤S7之后,包括:
S71、若所述第二问题或者所述第二答案不符合预设的终止面试触发条件,则生成识别意图指令,所述识别意图指令用于指示从所述第二答案中识别出所述用户端的意图;
S72、根据所述识别意图指令,获取从所述第二答案中识别出的所述用户端的意图,并按照预设的意图与问题节点的对应关系,获取在所述树状问题链中与所述第二问题相连接的第三问题;
S73、将所述第三问题发送给所述用户端。
如上所述,实现了获取第三问题,并发送给所述用户端。若所述第二问题或者所述第二答案不符合预设的终止面试触发条件,表明面试并未终止,因此需要再次生成第三问题。据此,生成识别意图指令,所述识别意图指令用于指示从所述第二答案中识别出所述用户端的意图。其中从所述第二答案中识别出所述用户端的意图可以为任意方式,包括且不限于与采用前述从第一答案中识别出所述用户端的意图相同的方式。再根据所述识别意图指令,获取从所述第二答案中识别出的所述用户端的意图,并按照预设的意图与问题节点的对应关系,获取在所述树状问题链中与所述第二问题相连接的第三问题,将所述第三问题发送给所述用户端。其中,由于已采用微表情识别模型获取过用户的情绪值,因此在生成第三问题之前可以不用再次利用微表情识别模型以获取所述用户的紧张情绪值。当然,进一步地,在生成第三问题之前也可以再次利用微表情识别模型以获取所述用户的紧张情绪值,并判断所述紧张情绪值是否处于预设的情绪范围值之内,从而再次判断用户是否处于正常状态。
参照图2,本申请实施例提供一种基于AI面试系统的面试装置,包括:
第一问题输出单元10,用于向用户端输出预设的第一问题,其中所述第一问题是预先生成的树状问题链的根问题;
意图获取单元20,用于接收所述用户端回复所述第一问题的第一答案,将所述第一答案输入预设的基于机器学习模型训练完成的意图识别模型中运算,从而获得所述用户端的意图,其中,所述意图识别模型基于所述第一问题、对所述第一问题的回答、以及与所述回答关联的意图组成的样本数据训练而成;
紧张情绪值获取单元30,用于打开所述用户端的摄像头采集所述用户端对应的用户的面部图像,并将所述面部图像输入到预设的基于神经网络模型训练完成的微表情识别模型中进行运算,从而得到所述用户的紧张情绪值,其中,所述微表情识别模型基于人脸图像,以及与所述人脸图像关联的紧张情绪值组成的样本数据训练而成;
紧张情绪值判断单元40,用于判断所述用户的紧张情绪值是否处于预设的情绪范围值之内;
第二问题获取单元50,用于若所述用户的紧张情绪值处于预设的情绪范围值之内,则根据所述用户端的意图,按照预设的意图与问题节点的对应关系,获取在所述树状问题链中与所述第一问题相连接的第二问题;
第二问题输出单元60,用于向所述用户端输出所述第二问题;
终止判断单元70,用于接收所述用户端回复所述第二问题的第二答案,并判断所述第二问题或者所述第二答案是否符合预设的终止面试触发条件;
面试报告生成单元80,用于若所述第二问题或者所述第二答案符合预设的终止面试触发条件,则生成面试报告,其中所述面试报告包括所述第一问题、第一答案、第二问题和第二答案。
其中上述单元分别用于执行的操作与前述实施方式的基于AI面试系统的面试方法的步骤一一对应,在此不再赘述。
在一个实施方式中,所述AI面试系统预存有多个初始树状问题链,所述初始树状问题链标记有多个标签,所述装置,包括:
特征信息获取单元,用于获取所述用户端对应的用户的特征信息,所述特征信息至少包括所述用户的职业信息;
标签标记单元,用于根据所述特征信息,按照预设的标签标记规则,对所述用户标记多个标签;
问题链筛选单元,用于从预存的多个初始树状问题链中筛选出最终树状问题链,其中所述最终树状问题链与所述用户具有最多的相同标签;
问题链发送单元,用于将所述最终树状问题链作为将发送给所述用户端的树状问题链。
其中上述单元分别用于执行的操作与前述实施方式的基于AI面试系统的面试方法的步骤一一对应,在此不再赘述。
在一个实施方式中,所述机器学习模型为CHAID决策树模型,所述装置,包括:
样本数据获取单元,用于获取指定量的样本数据,并将样本数据分成训练集和测试集;其中,所述样本数据由所述第一问题、对所述第一问题的回答、以及与所述回答关联的意图所构成;
训练单元,用于将训练集的样本数据输入到CHAID决策树模型中进行训练,得到初步CHAID决策树;
验证单元,用于利用所述测试集的样本数据验证所述初步CHAID决策树;
标记单元,用于如果验证通过,则将所述初步CHAID决策树记为所述意图识别模型。
其中上述单元分别用于执行的操作与前述实施方式的基于AI面试系统的面试方法的步骤一一对应,在此不再赘述。
在一个实施方式中,所述紧张情绪值获取单元30,包括:
图像采集子单元，用于打开所述用户端的摄像头采集所述用户端对应的用户的多幅初始图像，其中所述初始图像至少包括所述用户端对应的用户的面部；
区域划分子单元,用于将所述初始图像划分为多个区域,将每个区域的图像数据与预设的眼睛图像数据进行对比,得到每个区域图像数据与眼睛图像数据的差值,将差值不超过预设数值的区域记为眼睛区域;
嘴巴区域获取子单元,用于将每个区域的图像数据与预设的嘴巴图像数据进行比较,得到每个区域图像数据与嘴巴图像数据的差值,将差值不超过预设数值的区域记为嘴巴区域;
面部图像获取子单元，用于根据预设的面部几何结构比例，利用所述眼睛区域与所述嘴巴区域在所述初始图像中的位置，计算出面部区域，并将所述面部区域范围内的图像作为面部图像；
紧张情绪值获取子单元,用于将所述面部图像输入到预设的基于神经网络模型训练完成的微表情识别模型中进行运算,从而得到所述用户的紧张情绪值,其中,所述微表情识别模型基于人脸图像,以及与所述人脸图像关联的紧张情绪值组成的样本数据训练而成。
其中上述子单元分别用于执行的操作与前述实施方式的基于AI面试系统的面试方法的步骤一一对应,在此不再赘述。
在一个实施方式中,所述装置,包括:
训练数据获取单元,用于获取指定数量的样本数据,并将样本数据分成训练集和测试集;其中,所述样本数据包括人脸图像,以及与所述人脸图像关联的微表情类别;
初始微表情识别模型训练单元,用于将训练集的样本数据输入到预设的神经网络模型中进行训练,得到初始微表情识别模型,其中,训练的过程中采用随机梯度下降法;
初始微表情识别模型验证单元,用于利用测试集的样本数据验证所述初始微表情识别模型;
微表情识别模型获取单元,用于若验证通过,则将所述初始微表情识别模型记为所述微表情识别模型。
其中上述单元分别用于执行的操作与前述实施方式的基于AI面试系统的面试方法的步骤一一对应,在此不再赘述。
在一个实施方式中,所述终止判断单元70,包括:
末端节点判断子单元,用于接收所述用户端回复所述第二问题的第二答案,并判断所述第二问题是否处于所述树状问题链的末端节点;
第一终止判定子单元,用于若所述第二问题处于所述树状问题链的末端节点,则判定所述第二问题符合预设的终止面试触发条件;
结束关键词语判断子单元，用于接收所述用户端回复所述第二问题的第二答案，并判断所述第二答案是否包括预设的结束关键词语；
第二终止判定子单元,用于若所述第二答案包括预设的结束关键词语,则判定所述第二答案符合预设的终止面试触发条件。
其中上述子单元分别用于执行的操作与前述实施方式的基于AI面试系统的面试方法的步骤一一对应,在此不再赘述。
在一个实施方式中,所述装置,包括:
生成识别意图指令单元,用于若所述第二问题或者所述第二答案不符合预设的终止面试触发条件,则生成识别意图指令,所述识别意图指令用于指示从所述第二答案中识别出所述用户端的意图;
意图识别单元,用于根据所述识别意图指令,获取从所述第二答案中识别出的所述用户端的意图,并按照预设的意图与问题节点的对应关系,获取在所述树状问题链中与所述第二问题相连接的第三问题;
第三问题发送单元,用于将所述第三问题发送给所述用户端。
其中上述单元分别用于执行的操作与前述实施方式的基于AI面试系统的面试方法的步骤一一对应,在此不再赘述。
参照图3，本申请实施例中还提供一种计算机设备，该计算机设备可以是服务器，其内部结构可以如图所示。该计算机设备包括通过系统总线连接的处理器、存储器、网络接口和数据库。其中，该计算机设备的处理器用于提供计算和控制能力。该计算机设备的存储器包括非易失性存储介质、内存储器。该非易失性存储介质存储有操作系统、计算机可读指令和数据库。该内存储器为非易失性存储介质中的操作系统和计算机可读指令的运行提供环境。该计算机设备的数据库用于存储基于AI面试系统的面试方法所用数据。该计算机设备的网络接口用于与外部的终端通过网络连接通信。该计算机可读指令被处理器执行时，实现上述任一实施例所示出的基于AI面试系统的面试方法。
上述处理器执行上述基于AI面试系统的面试方法的步骤包括:
向用户端输出预设的第一问题,其中所述第一问题是预先生成的树状问题链的根问题;
接收所述用户端回复所述第一问题的第一答案,将所述第一答案输入预设的基于机器学习模型训练完成的意图识别模型中运算,从而获得所述用户端的意图,其中,所述意图识别模型基于所述第一问题、对所述第一问题的回答、以及与所述回答关联的意图组成的样本数据训练而成;
打开所述用户端的摄像头采集所述用户端对应的用户的面部图像,并将所述面部图像输入到预设的基于神经网络模型训练完成的微表情识别模型中进行运算,从而得到所述用户的紧张情绪值,其中,所述微表情识别模型基于人脸图像,以及与所述人脸图像关联的紧张情绪值组成的样本数据训练而成;
判断所述用户的紧张情绪值是否处于预设的情绪范围值之内;
若所述用户的紧张情绪值处于预设的情绪范围值之内,则根据所述用户端的意图,按照预设的意图与问题节点的对应关系,获取在所述树状问题链中与所述第一问题相连接的第二问题;
向所述用户端输出所述第二问题;
接收所述用户端回复所述第二问题的第二答案,并判断所述第二问题或者所述第二答案是否符合预设的终止面试触发条件;
若所述第二问题或者所述第二答案符合预设的终止面试触发条件,则生成面试报告,其中所述面试报告包括所述第一问题、第一答案、第二问题和第二答案。
本领域技术人员可以理解,图中示出的结构,仅仅是与本申请方案相关的部分结构的框图,并不构成对本申请方案所应用于其上的计算机设备的限定。
本申请一实施例还提供一种计算机可读存储介质,所述计算机可读存储介质可以是非易失性,也可以是易失性,其上存储有计算机可读指令,计算机可读指令被处理器执行时实现上述的任一实施例所示出的基于AI面试系统的面试方法,其中,所述基于AI面试系统的面试方法包括:
向用户端输出预设的第一问题,其中所述第一问题是预先生成的树状问题链的根问题;
接收所述用户端回复所述第一问题的第一答案,将所述第一答案输入预设的基于机器学习模型训练完成的意图识别模型中运算,从而获得所述用户端的意图,其中,所述意图识别模型基于所述第一问题、对所述第一问题的回答、以及与所述回答关联的意图组成的样本数据训练而成;
打开所述用户端的摄像头采集所述用户端对应的用户的面部图像,并将所述面部图像输入到预设的基于神经网络模型训练完成的微表情识别模型中进行运算,从而得到所述用户的紧张情绪值,其中,所述微表情识别模型基于人脸图像,以及与所述人脸图像关联的紧张情绪值组成的样本数据训练而成;
判断所述用户的紧张情绪值是否处于预设的情绪范围值之内;
若所述用户的紧张情绪值处于预设的情绪范围值之内,则根据所述用户端的意图,按照预设的意图与问题节点的对应关系,获取在所述树状问题链中与所述第一问题相连接的第二问题;
向所述用户端输出所述第二问题;
接收所述用户端回复所述第二问题的第二答案,并判断所述第二问题或者所述第二答案是否符合预设的终止面试触发条件;
若所述第二问题或者所述第二答案符合预设的终止面试触发条件,则生成面试报告,其中所述面试报告包括所述第一问题、第一答案、第二问题和第二答案。
本领域普通技术人员可以理解，实现上述实施例方法中的全部或部分流程，可以通过计算机可读指令指示相关的硬件来完成，所述的计算机可读指令可存储于一非易失性计算机可读取存储介质中，该计算机可读指令在执行时，可包括如上述各方法的实施例的流程。其中，本申请所提供的和实施例中所使用的对存储器、存储、数据库或其它介质的任何引用，均可包括非易失性和/或易失性存储器。非易失性存储器可以包括只读存储器(ROM)、可编程ROM(PROM)、电可编程ROM(EPROM)、电可擦除可编程ROM(EEPROM)或闪存。易失性存储器可包括随机存取存储器(RAM)或者外部高速缓冲存储器。作为说明而非局限，RAM以多种形式可得，诸如静态RAM(SRAM)、动态RAM(DRAM)、同步DRAM(SDRAM)、双倍数据速率SDRAM(DDR SDRAM)、增强型SDRAM(ESDRAM)、同步链路(Synchlink)DRAM(SLDRAM)、存储器总线(Rambus)直接RAM(RDRAM)、直接存储器总线动态RAM(DRDRAM)、以及存储器总线动态RAM(RDRAM)等。
以上所述仅为本申请的优选实施例,并非因此限制本申请的专利范围,凡是利用本申请说明书及附图内容所作的等效结构或等效流程变换,或直接或间接运用在其他相关的技术领域,均同理包括在本申请的专利保护范围内。

Claims (20)

  1. 一种基于AI面试系统的面试方法,其中,包括:
    向用户端输出预设的第一问题,其中所述第一问题是预先生成的树状问题链的根问题;
    接收所述用户端回复所述第一问题的第一答案,将所述第一答案输入预设的基于机器学习模型训练完成的意图识别模型中运算,从而获得所述用户端的意图,其中,所述意图识别模型基于所述第一问题、对所述第一问题的回答、以及与所述回答关联的意图组成的样本数据训练而成;
    打开所述用户端的摄像头采集所述用户端对应的用户的面部图像,并将所述面部图像输入到预设的基于神经网络模型训练完成的微表情识别模型中进行运算,从而得到所述用户的紧张情绪值,其中,所述微表情识别模型基于人脸图像,以及与所述人脸图像关联的紧张情绪值组成的样本数据训练而成;
    判断所述用户的紧张情绪值是否处于预设的情绪范围值之内;
    若所述用户的紧张情绪值处于预设的情绪范围值之内,则根据所述用户端的意图,按照预设的意图与问题节点的对应关系,获取在所述树状问题链中与所述第一问题相连接的第二问题;
    向所述用户端输出所述第二问题;
    接收所述用户端回复所述第二问题的第二答案,并判断所述第二问题或者所述第二答案是否符合预设的终止面试触发条件;
    若所述第二问题或者所述第二答案符合预设的终止面试触发条件,则生成面试报告,其中所述面试报告包括所述第一问题、第一答案、第二问题和第二答案。
  2. 根据权利要求1所述的基于AI面试系统的面试方法,其中,所述AI面试系统预存有多个初始树状问题链,所述初始树状问题链标记有多个标签,所述向用户端输出预设的第一问题,其中所述第一问题是预先生成的树状问题链的根问题的步骤之前,包括:
    获取所述用户端对应的用户的特征信息,所述特征信息至少包括所述用户的职业信息;
    根据所述特征信息,按照预设的标签标记规则,对所述用户标记多个标签;
    从预存的多个初始树状问题链中筛选出最终树状问题链,其中所述最终树状问题链与所述用户具有最多的相同标签;
    将所述最终树状问题链作为将发送给所述用户端的树状问题链。
  3. 根据权利要求1所述的基于AI面试系统的面试方法,其中,所述机器学习模型为CHAID决策树模型,所述接收所述用户端回复所述第一问题的第一答案,将所述第一答案输入预设的基于机器学习模型训练完成的意图识别模型中运算,从而获得所述用户端的意图,其中,所述意图识别模型基于所述第一问题、对所述第一问题的回答、以及与所述回答关联的意图组成的样本数据训练而成的步骤之前,包括:
    获取指定量的样本数据，并将样本数据分成训练集和测试集；其中，所述样本数据由所述第一问题、对所述第一问题的回答、以及与所述回答关联的意图所构成；
    将训练集的样本数据输入到CHAID决策树模型中进行训练,得到初步CHAID决策树;
    利用所述测试集的样本数据验证所述初步CHAID决策树;
    如果验证通过,则将所述初步CHAID决策树记为所述意图识别模型。
  4. 根据权利要求1所述的基于AI面试系统的面试方法,其中,所述打开所述用户端的摄像头采集所述用户端对应的用户的面部图像,并将所述面部图像输入到预设的基于神经网络模型训练完成的微表情识别模型中进行运算,从而得到所述用户的紧张情绪值,其中,所述微表情识别模型基于人脸图像,以及与所述人脸图像关联的紧张情绪值组成的样本数据训练而成的步骤,包括:
    打开所述用户端的摄像头采集所述用户端对应的用户的多幅初始图像，其中所述初始图像至少包括所述用户端对应的用户的面部；
    将所述初始图像划分为多个区域,将每个区域的图像数据与预设的眼睛图像数据进行对比,得到每个区域图像数据与眼睛图像数据的差值,将差值不超过预设数值的区域记为眼睛区域;
    将每个区域的图像数据与预设的嘴巴图像数据进行比较,得到每个区域图像数据与嘴巴图像数据的差值,将差值不超过预设数值的区域记为嘴巴区域;
    根据预设的面部几何结构比例,利用所述眼睛区域与所述嘴巴区域在所述初始图像中的位置,计算出面部区域,并将所述面部区域范围内的图像作为面部图像;
    将所述面部图像输入到预设的基于神经网络模型训练完成的微表情识别模型中进行运算,从而得到所述用户的紧张情绪值,其中,所述微表情识别模型基于人脸图像,以及与所述人脸图像关联的紧张情绪值组成的样本数据训练而成。
  5. 根据权利要求1所述的基于AI面试系统的面试方法,其中,所述打开所述用户端的摄像头采集所述用户端对应的用户的面部图像,并将所述面部图像输入到预设的基于神经网络模型训练完成的微表情识别模型中进行运算,从而得到所述用户的紧张情绪值,其中,所述微表情识别模型基于人脸图像,以及与所述人脸图像关联的紧张情绪值组成的样本数据训练而成的步骤之前,包括:
    获取指定数量的样本数据,并将样本数据分成训练集和测试集;其中,所述样本数据包括人脸图像,以及与所述人脸图像关联的微表情类别;
    将训练集的样本数据输入到预设的神经网络模型中进行训练,得到初始微表情识别模型,其中,训练的过程中采用随机梯度下降法;
    利用测试集的样本数据验证所述初始微表情识别模型;
    若验证通过,则将所述初始微表情识别模型记为所述微表情识别模型。
  6. 根据权利要求1所述的基于AI面试系统的面试方法，其中，所述接收所述用户端回复所述第二问题的第二答案，并判断所述第二问题或者所述第二答案是否符合预设的终止面试触发条件的步骤，包括：
    接收所述用户端回复所述第二问题的第二答案,并判断所述第二问题是否处于所述树状问题链的末端节点;
    若所述第二问题处于所述树状问题链的末端节点,则判定所述第二问题符合预设的终止面试触发条件;
    或者,接收所述用户端回复所述第二问题的第二答案,并判断所述第二答案是否包括预设的结束关键词语;
    若所述第二答案包括预设的结束关键词语,则判定所述第二答案符合预设的终止面试触发条件。
  7. 根据权利要求1所述的基于AI面试系统的面试方法,其中,所述接收所述用户端回复所述第二问题的第二答案,并判断所述第二问题或者所述第二答案是否符合预设的终止面试触发条件的步骤之后,包括:
    若所述第二问题或者所述第二答案不符合预设的终止面试触发条件,则生成识别意图指令,所述识别意图指令用于指示从所述第二答案中识别出所述用户端的意图;
    根据所述识别意图指令,获取从所述第二答案中识别出的所述用户端的意图,并按照预设的意图与问题节点的对应关系,获取在所述树状问题链中与所述第二问题相连接的第三问题;
    将所述第三问题发送给所述用户端。
  8. 一种基于AI面试系统的面试装置,其中,包括:
    第一问题输出单元,用于向用户端输出预设的第一问题,其中所述第一问题是预先生成的树状问题链的根问题;
    意图获取单元,用于接收所述用户端回复所述第一问题的第一答案,将所述第一答案输入预设的基于机器学习模型训练完成的意图识别模型中运算,从而获得所述用户端的意图,其中,所述意图识别模型基于所述第一问题、对所述第一问题的回答、以及与所述回答关联的意图组成的样本数据训练而成;
    紧张情绪值获取单元,用于打开所述用户端的摄像头采集所述用户端对应的用户的面部图像,并将所述面部图像输入到预设的基于神经网络模型训练完成的微表情识别模型中进行运算,从而得到所述用户的紧张情绪值,其中,所述微表情识别模型基于人脸图像,以及与所述人脸图像关联的紧张情绪值组成的样本数据训练而成;
    紧张情绪值判断单元,用于判断所述用户的紧张情绪值是否处于预设的情绪范围值之内;
    第二问题获取单元,用于若所述用户的紧张情绪值处于预设的情绪范围值之内,则根据所述用户端的意图,按照预设的意图与问题节点的对应关系,获取在所述树状问题链中与所述第一问题相连接的第二问题;
    第二问题输出单元,用于向所述用户端输出所述第二问题;
    终止判断单元,用于接收所述用户端回复所述第二问题的第二答案,并判断所述第二问题或者所述第二答案是否符合预设的终止面试触发条件;
    面试报告生成单元,用于若所述第二问题或者所述第二答案符合预设的终止面试触发条件,则生成面试报告,其中所述面试报告包括所述第一问题、第一答案、第二问题和第二答案。
  9. 根据权利要求8所述的基于AI面试系统的面试装置,其中,所述AI面试系统预存有多个初始树状问题链,所述初始树状问题链标记有多个标签,所述装置,包括:
    特征信息获取单元,用于获取所述用户端对应的用户的特征信息,所述特征信息至少包括所述用户的职业信息;
    标签标记单元,用于根据所述特征信息,按照预设的标签标记规则,对所述用户标记多个标签;
    问题链筛选单元,用于从预存的多个初始树状问题链中筛选出最终树状问题链,其中所述最终树状问题链与所述用户具有最多的相同标签;
    问题链发送单元,用于将所述最终树状问题链作为将发送给所述用户端的树状问题链。
  10. 根据权利要求8所述的基于AI面试系统的面试装置,其中,所述机器学习模型为CHAID决策树模型,所述装置,包括:
    样本数据获取单元,用于获取指定量的样本数据,并将样本数据分成训练集和测试集;其中,所述样本数据由所述第一问题、对所述第一问题的回答、以及与所述回答关联的意图所构成;
    训练单元,用于将训练集的样本数据输入到CHAID决策树模型中进行训练,得到初步CHAID决策树;
    验证单元,用于利用所述测试集的样本数据验证所述初步CHAID决策树;
    标记单元,用于如果验证通过,则将所述初步CHAID决策树记为所述意图识别模型。
  11. 一种计算机设备,包括存储器和处理器,所述存储器存储有计算机可读指令,其中,所述处理器执行所述计算机可读指令时实现一种基于AI面试系统的面试方法:
    其中,所述基于AI面试系统的面试方法包括:
    向用户端输出预设的第一问题,其中所述第一问题是预先生成的树状问题链的根问题;
    接收所述用户端回复所述第一问题的第一答案,将所述第一答案输入预设的基于机器学习模型训练完成的意图识别模型中运算,从而获得所述用户端的意图,其中,所述意图识别模型基于所述第一问题、对所述第一问题的回答、以及与所述回答关联的意图组成的样本数据训练而成;
    打开所述用户端的摄像头采集所述用户端对应的用户的面部图像,并将所述面部图像输入到预设的基于神经网络模型训练完成的微表情识别模型中进行运算,从而得到所述用户的紧张情绪值,其中,所述微表情识别模型基于人脸图像,以及与所述人脸图像关联的紧张情绪值组成的样本数据训练而成;
    判断所述用户的紧张情绪值是否处于预设的情绪范围值之内;
    若所述用户的紧张情绪值处于预设的情绪范围值之内,则根据所述用户端的意图,按照预设的意图与问题节点的对应关系,获取在所述树状问题链中与所述第一问题相连接的第二问题;
    向所述用户端输出所述第二问题;
    接收所述用户端回复所述第二问题的第二答案,并判断所述第二问题或者所述第二答案是否符合预设的终止面试触发条件;
    若所述第二问题或者所述第二答案符合预设的终止面试触发条件,则生成面试报告,其中所述面试报告包括所述第一问题、第一答案、第二问题和第二答案。
  12. 根据权利要求11所述的计算机设备,其中,所述AI面试系统预存有多个初始树状问题链,所述初始树状问题链标记有多个标签,所述向用户端输出预设的第一问题,其中所述第一问题是预先生成的树状问题链的根问题的步骤之前,包括:
    获取所述用户端对应的用户的特征信息,所述特征信息至少包括所述用户的职业信息;
    根据所述特征信息,按照预设的标签标记规则,对所述用户标记多个标签;
    从预存的多个初始树状问题链中筛选出最终树状问题链,其中所述最终树状问题链与所述用户具有最多的相同标签;
    将所述最终树状问题链作为将发送给所述用户端的树状问题链。
  13. 根据权利要求11所述的计算机设备,其中,所述机器学习模型为CHAID决策树模型,所述接收所述用户端回复所述第一问题的第一答案,将所述第一答案输入预设的基于机器学习模型训练完成的意图识别模型中运算,从而获得所述用户端的意图,其中,所述意图识别模型基于所述第一问题、对所述第一问题的回答、以及与所述回答关联的意图组成的样本数据训练而成的步骤之前,包括:
    获取指定量的样本数据,并将样本数据分成训练集和测试集;其中,所述样本数据由所述第一问题、对所述第一问题的回答、以及与所述回答关联的意图所构成;
    将训练集的样本数据输入到CHAID决策树模型中进行训练,得到初步CHAID决策树;
    利用所述测试集的样本数据验证所述初步CHAID决策树;
    如果验证通过,则将所述初步CHAID决策树记为所述意图识别模型。
  14. 根据权利要求11所述的计算机设备,其中,所述打开所述用户端的摄像头采集所述用户端对应的用户的面部图像,并将所述面部图像输入到预设的基于神经网络模型训练完成的微表情识别模型中进行运算,从而得到所述用户的紧张情绪值,其中,所述微表情识别模型基于人脸图像,以及与所述人脸图像关联的紧张情绪值组成的样本数据训练而成的步骤之前,包括:
    获取指定数量的样本数据,并将样本数据分成训练集和测试集;其中,所述样本数据包括人脸图像,以及与所述人脸图像关联的微表情类别;
    将训练集的样本数据输入到预设的神经网络模型中进行训练,得到初始微表情识别模型,其中,训练的过程中采用随机梯度下降法;
    利用测试集的样本数据验证所述初始微表情识别模型;
    若验证通过,则将所述初始微表情识别模型记为所述微表情识别模型。
  15. 根据权利要求11所述的计算机设备,其中,所述接收所述用户端回复所述第二问题的第二答案,并判断所述第二问题或者所述第二答案是否符合预设的终止面试触发条件的步骤之后,包括:
    若所述第二问题或者所述第二答案不符合预设的终止面试触发条件,则生成识别意图指令,所述识别意图指令用于指示从所述第二答案中识别出所述用户端的意图;
    根据所述识别意图指令,获取从所述第二答案中识别出的所述用户端的意图,并按照预设的意图与问题节点的对应关系,获取在所述树状问题链中与所述第二问题相连接的第三问题;
    将所述第三问题发送给所述用户端。
  16. 一种计算机可读存储介质,其上存储有计算机可读指令,其中,所述计算机可读指令被处理器执行时实现一种基于AI面试系统的面试方法,其中,所述基于AI面试系统的面试方法包括以下步骤:
    向用户端输出预设的第一问题,其中所述第一问题是预先生成的树状问题链的根问题;
    接收所述用户端回复所述第一问题的第一答案,将所述第一答案输入预设的基于机器学习模型训练完成的意图识别模型中运算,从而获得所述用户端的意图,其中,所述意图识别模型基于所述第一问题、对所述第一问题的回答、以及与所述回答关联的意图组成的样本数据训练而成;
    打开所述用户端的摄像头采集所述用户端对应的用户的面部图像,并将所述面部图像输入到预设的基于神经网络模型训练完成的微表情识别模型中进行运算,从而得到所述用户的紧张情绪值,其中,所述微表情识别模型基于人脸图像,以及与所述人脸图像关联的紧张情绪值组成的样本数据训练而成;
    判断所述用户的紧张情绪值是否处于预设的情绪范围值之内;
    若所述用户的紧张情绪值处于预设的情绪范围值之内,则根据所述用户端的意图,按照预设的意图与问题节点的对应关系,获取在所述树状问题链中与所述第一问题相连接的第二问题;
    向所述用户端输出所述第二问题;
    接收所述用户端回复所述第二问题的第二答案,并判断所述第二问题或者所述第二答案是否符合预设的终止面试触发条件;
    若所述第二问题或者所述第二答案符合预设的终止面试触发条件,则生成面试报告,其中所述面试报告包括所述第一问题、第一答案、第二问题和第二答案。
  17. 根据权利要求16所述的计算机可读存储介质，其中，所述AI面试系统预存有多个初始树状问题链，所述初始树状问题链标记有多个标签，所述向用户端输出预设的第一问题，其中所述第一问题是预先生成的树状问题链的根问题的步骤之前，包括：
    获取所述用户端对应的用户的特征信息,所述特征信息至少包括所述用户的职业信息;
    根据所述特征信息,按照预设的标签标记规则,对所述用户标记多个标签;
    从预存的多个初始树状问题链中筛选出最终树状问题链,其中所述最终树状问题链与所述用户具有最多的相同标签;
    将所述最终树状问题链作为将发送给所述用户端的树状问题链。
  18. 根据权利要求16所述的计算机可读存储介质,其中,所述机器学习模型为CHAID决策树模型,所述接收所述用户端回复所述第一问题的第一答案,将所述第一答案输入预设的基于机器学习模型训练完成的意图识别模型中运算,从而获得所述用户端的意图,其中,所述意图识别模型基于所述第一问题、对所述第一问题的回答、以及与所述回答关联的意图组成的样本数据训练而成的步骤之前,包括:
    获取指定量的样本数据,并将样本数据分成训练集和测试集;其中,所述样本数据由所述第一问题、对所述第一问题的回答、以及与所述回答关联的意图所构成;
    将训练集的样本数据输入到CHAID决策树模型中进行训练,得到初步CHAID决策树;
    利用所述测试集的样本数据验证所述初步CHAID决策树;
    如果验证通过,则将所述初步CHAID决策树记为所述意图识别模型。
  19. 根据权利要求16所述的计算机可读存储介质,其中,所述打开所述用户端的摄像头采集所述用户端对应的用户的面部图像,并将所述面部图像输入到预设的基于神经网络模型训练完成的微表情识别模型中进行运算,从而得到所述用户的紧张情绪值,其中,所述微表情识别模型基于人脸图像,以及与所述人脸图像关联的紧张情绪值组成的样本数据训练而成的步骤之前,包括:
    获取指定数量的样本数据,并将样本数据分成训练集和测试集;其中,所述样本数据包括人脸图像,以及与所述人脸图像关联的微表情类别;
    将训练集的样本数据输入到预设的神经网络模型中进行训练,得到初始微表情识别模型,其中,训练的过程中采用随机梯度下降法;
    利用测试集的样本数据验证所述初始微表情识别模型;
    若验证通过,则将所述初始微表情识别模型记为所述微表情识别模型。
  20. 根据权利要求16所述的计算机可读存储介质,其中,所述接收所述用户端回复所述第二问题的第二答案,并判断所述第二问题或者所述第二答案是否符合预设的终止面试触发条件的步骤之后,包括:
    若所述第二问题或者所述第二答案不符合预设的终止面试触发条件,则生成识别意图指令,所述识别意图指令用于指示从所述第二答案中识别出所述用户端的意图;
    根据所述识别意图指令，获取从所述第二答案中识别出的所述用户端的意图，并按照预设的意图与问题节点的对应关系，获取在所述树状问题链中与所述第二问题相连接的第三问题；
    将所述第三问题发送给所述用户端。
PCT/CN2020/098822 2019-09-06 2020-06-29 基于ai面试系统的面试方法、装置和计算机设备 WO2021042842A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910843465.6 2019-09-06
CN201910843465.6A CN110728182B (zh) 2019-09-06 2019-09-06 基于ai面试系统的面试方法、装置和计算机设备

Publications (1)

Publication Number Publication Date
WO2021042842A1 true WO2021042842A1 (zh) 2021-03-11

Family

ID=69217925

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/098822 WO2021042842A1 (zh) 2019-09-06 2020-06-29 基于ai面试系统的面试方法、装置和计算机设备

Country Status (2)

Country Link
CN (1) CN110728182B (zh)
WO (1) WO2021042842A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110728182B (zh) * 2019-09-06 2023-12-26 平安科技(深圳)有限公司 基于ai面试系统的面试方法、装置和计算机设备
CN111540440B (zh) * 2020-04-23 2021-01-15 深圳市镜象科技有限公司 基于人工智能的心理体检方法、装置、设备和介质
CN114399827B (zh) * 2022-03-14 2022-08-09 潍坊护理职业学院 基于面部微表情的高校毕业生职业性格测试方法及系统

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018032164A (ja) * 2016-08-23 2018-03-01 株式会社ユニバーサルエンターテインメント 面接システム
CN109492854A (zh) * 2018-09-17 2019-03-19 平安科技(深圳)有限公司 智能机器人面试的方法、装置、计算机设备和存储介质
CN109670023A (zh) * 2018-12-14 2019-04-23 平安城市建设科技(深圳)有限公司 人机自动面试方法、装置、设备和存储介质
CN109767321A (zh) * 2018-12-18 2019-05-17 深圳壹账通智能科技有限公司 问答过程优化方法、装置、计算机设备和存储介质
CN109961052A (zh) * 2019-03-29 2019-07-02 上海大易云计算股份有限公司 一种基于表情分析技术的视频面试方法及系统
CN109993053A (zh) * 2019-01-23 2019-07-09 平安科技(深圳)有限公司 电子装置、基于微表情识别的访谈辅助方法和存储介质
CN110728182A (zh) * 2019-09-06 2020-01-24 平安科技(深圳)有限公司 基于ai面试系统的面试方法、装置和计算机设备

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107633203A (zh) * 2017-08-17 2018-01-26 平安科技(深圳)有限公司 面部情绪识别方法、装置及存储介质
CN109766917A (zh) * 2018-12-18 2019-05-17 深圳壹账通智能科技有限公司 面试视频数据处理方法、装置、计算机设备和存储介质

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116757300A (zh) * 2023-07-03 2023-09-15 萍乡亦远科技服务有限公司 基于循环卷积网络的智能预订数据处理方法及系统
CN116757300B (zh) * 2023-07-03 2024-04-19 深圳市捷信达电子有限公司 基于循环卷积网络的智能预订数据处理方法及系统

Also Published As

Publication number Publication date
CN110728182A (zh) 2020-01-24
CN110728182B (zh) 2023-12-26

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20860611

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 18/07/2022)

122 Ep: pct application non-entry in european phase

Ref document number: 20860611

Country of ref document: EP

Kind code of ref document: A1