CN110782221A - Intelligent interview evaluation system and method

Intelligent interview evaluation system and method

Info

Publication number: CN110782221A
Application number: CN201910885901.6A
Authority: CN (China)
Prior art keywords: module, audio, training, information, interview
Legal status: Pending (the status listed is an assumption, not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 丁玥, 杜城祥, 徐沨
Current assignee: Individual
Original assignee: Individual
Priority date / Filing date: 2019-09-19
Publication date: 2020-02-11
Application filed by Individual; priority to CN201910885901.6A


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00: Administration; Management
    • G06Q10/10: Office automation; Time management
    • G06Q10/105: Human resources
    • G06Q10/1053: Employment or hiring

Landscapes

  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Engineering & Computer Science (AREA)
  • Strategic Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Operations Research (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Data Mining & Analysis (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention provides an intelligent interview evaluation system comprising an applicant information input module, a post information input module, an audio input module, a preprocessing module, an audio prediction module and a prediction result display module, wherein the applicant information input module is used for inputting applicant information, the post information input module is used for inputting post information, the audio input module is used for inputting interview audio, the preprocessing module is used for preprocessing the input information, and the audio prediction module is used for prediction: the applicant information input module is a human-computer interaction interface; the applicant information input module is connected with the preprocessing module; the post information input module is a human-computer interaction interface; the post information input module is connected with the preprocessing module; the audio input module is a human-computer interaction interface; the audio input module is connected with the preprocessing module; the preprocessing module is connected with the audio prediction module; the audio prediction module is connected with the prediction result display module. An intelligent interview evaluation method uses the intelligent interview evaluation system. The invention has the beneficial effects that, on the basis of low cost, the system can adapt to scenarios with many features while maintaining good model accuracy and generalization capability.

Description

Intelligent interview evaluation system and method
Technical Field
The invention relates to the field of interview evaluation, and in particular to an intelligent interview evaluation system and method.
Background
The traditional job-hunting interview, particularly the initial screening stage of large-scale recruitment, imposes a heavy workload on an enterprise's human resources specialists, who must browse large numbers of resumes and conduct telephone interviews with applicants. To make the recruitment interview stage of an enterprise more intelligent, relieve the workload of human resources specialists, bring convenience to job seekers, and free talent from the constraints of time and space, the invention designs an artificial-intelligence interview system: applying a machine learning algorithm and taking the employer's requirements into account, it automatically gives a comprehensive evaluation of the applicant from the submitted resume and the interview question-and-answer interaction at the terminal.
At present, interview comprehensive-evaluation algorithms based on artificial intelligence are very rare, mainly because recruitment at most enterprises still follows the traditional mode of manually browsing and screening resumes and then making a preliminary telephone interview. Large job-hunting websites screen resumes through intelligent post matching, i.e., computing a matching value between the job seeker and the applied post from information such as the attributes and labels of the person and the company. The disadvantages of this mode are: (1) the matching algorithm is not an intelligent algorithm based on machine learning, so a good result cannot be obtained; (2) the user's information comes from registering and completing personal data, so the presented dimensions are insufficient; (3) current algorithms do not give multi-dimensional judgments over the entire preliminary job-hunting process.
In view of the above problems, the inventions of application Nos. CN201910191404.6 and CN201910148095.4 adopt regression models; however, regression models are in essence linear combinations, and when the number of features is large, the accuracy and generalization capability of such models are greatly affected.
Therefore, the market needs an intelligent interview evaluation system and method that, on the basis of low cost, can adapt to scenarios with many features while maintaining good model accuracy and generalization capability.
Disclosure of Invention
In order to solve the above technical problems, the invention discloses an intelligent interview evaluation system and method. The technical solution of the invention is implemented as follows:
An intelligent interview evaluation system comprises an applicant information input module, a post information input module, an audio input module, a preprocessing module, an audio prediction module and a prediction result display module, wherein the applicant information input module is used for inputting applicant information, the post information input module is used for inputting post information, the audio input module is used for inputting audio, the preprocessing module is used for preprocessing the input information, the audio prediction module is used for prediction, and the prediction result display module is used for displaying the prediction result: the applicant information input module is a human-computer interaction interface; the applicant information input module is connected with the preprocessing module; the post information input module is a human-computer interaction interface; the post information input module is connected with the preprocessing module; the audio input module is a human-computer interaction interface; the audio input module is connected with the preprocessing module; the preprocessing module is connected with the audio prediction module; the audio prediction module is connected with the prediction result display module.
Preferably, the system further comprises a prediction model training module; the prediction model training module comprises a sample database, a voice recognition system, a Chinese word segmentation system and a prediction model training engine; the sample database stores original data; the voice recognition system is connected with the sample database; the Chinese word segmentation system is connected with the prediction model training engine; the prediction model training engine is connected with the audio prediction module.
An intelligent interview evaluation method comprises the following steps: S1: inputting applicant information by using the applicant information input module; S2: inputting post information by using the post information input module; S3: inputting interview audio information by using the audio input module; S4: inputting the applicant information, the post information and the interview audio information into the audio prediction module; S5: the audio prediction module outputs an evaluation result by using the prediction model and transmits the evaluation result to the prediction result display module; S6: displaying the evaluation result by using the prediction result display module.
Preferably, the method further comprises S0: training the prediction model; S0 includes: S0-1: extracting training applicant information from the sample database, and performing feature coding on it to obtain the training applicant feature code; S0-2: extracting training post information from the sample database, and performing the feature coding on it to obtain the training post feature code; S0-3: extracting training audio information from the sample database, and performing speech recognition on it by using the speech recognition system to generate text information; S0-4: segmenting the text information by using the Chinese word segmentation system to generate segmented text; S0-5: performing model training on the segmented text, the training post feature code and the training applicant feature code by using the prediction model training engine to generate the prediction model.
Preferably, the method adopted by the feature coding is one of one-hot encoding and label encoding.
Preferably, S0-3 includes: S0-3-1: converting the training audio information into the WAV format to obtain WAV audio information; S0-3-2: performing VAD processing on the WAV audio information to obtain VAD audio data; S0-3-3: performing data framing on the VAD audio data to obtain audio frame data; S0-3-4: filtering the audio frame data by using a window function to obtain filtered voice data; S0-3-5: transforming the filtered voice data by using the fast Fourier transform to obtain spectrum data; S0-3-6: reading the spectrum data, and training an acoustic model by using a convolutional neural network; S0-3-7: recognizing the spectrum data by using the acoustic model to obtain an acoustic model recognition result; S0-3-8: performing CTC decoding on the acoustic model recognition result to obtain a voice pinyin sequence; S0-3-9: converting the voice pinyin sequence into text information by using a Chinese language model.
Preferably, S0-5 includes: S0-5-1: using the text information for word embedding to generate a text vector; S0-5-2: performing Average Embedding on all the questions and all the answers of the text vector to obtain question-answer word vectors; S0-5-3: performing feature interaction on the question-answer word vectors to obtain a plurality of groups of feature matrices and form a multi-channel feature tensor; S0-5-4: extracting high-order features of the multi-channel feature tensor by using a convolutional neural network to generate an output vector; S0-5-5: fusing the output vector, the training applicant feature code and the training post feature code to generate a scoring vector; S0-5-6: decomposing the scoring vector by using a factorization machine to obtain a scoring result; S0-5-7: calculating the error between the scoring result and the real score, and adjusting the parameters of the convolutional neural network according to the error by using the BP-algorithm update formula; if the number of executions of S0-5-6 has not reached the maximum number of iterations, returning to S0-5-6; if it has, outputting the convolutional neural network as the prediction model.
An intelligent interview evaluation device runs an intelligent interview evaluation system and executes an intelligent interview evaluation method, and comprises a memory and an arithmetic unit.
By implementing the technical solution of the invention, the prior-art problem that good evaluation accuracy and generalization capability cannot be maintained for interview requirements with many features can be solved; on the premise of low cost, the technical effect of adapting to feature-rich conditions while keeping good evaluation accuracy and generalization capability can be realized.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a block diagram of one embodiment of an interview intelligence evaluation system;
FIG. 2 is an overall flow diagram of one particular embodiment of an interview intelligence evaluation method;
FIG. 3 is a flow chart of S0 of a preferred embodiment of a method for interview intelligence evaluation;
FIG. 4 is a flow chart of S0-3 of a preferred embodiment of a method for interview intelligence evaluation;
FIG. 5 is a flow chart of S0-5 of a preferred embodiment of an interview intelligence evaluation method.
In the above drawings, the reference numerals denote:
1-applicant information input module;
2-post information input module;
3-audio input module;
4-preprocessing module;
5-audio prediction module;
6-prediction result display module;
7-prediction model training module;
71-sample database; 72-speech recognition system; 73-Chinese word segmentation system; 74-prediction model training engine.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In a specific embodiment, as shown in fig. 1, an interview intelligent evaluation system includes an applicant information input module 1, a post information input module 2, an audio input module 3, a preprocessing module 4, an audio prediction module 5, and a prediction result display module 6: the applicant information input module 1 is a human-computer interaction interface; the applicant information input module 1 is connected with the preprocessing module 4; the post information input module 2 is a human-computer interaction interface; the post information input module 2 is connected with the preprocessing module 4; the audio input module 3 is a human-computer interaction interface; the audio input module 3 is connected with the preprocessing module 4; the preprocessing module 4 is connected with the audio prediction module 5; the audio prediction module 5 is connected with the prediction result display module 6.
In this specific embodiment, the user inputs the applicant's resume (e.g., education background, age, work experience and professional qualifications) through the applicant information input module 1, inputs the post requirements (e.g., required education background, age, work experience and professional qualifications) through the post information input module 2, and inputs the recording of the whole interview question-and-answer process through the audio input module 3. The preprocessing module 4 converts the resume into the corresponding resume feature code and converts the post requirements into the corresponding post information feature code. The audio prediction module 5 outputs a corresponding prediction result from the resume feature code, the post information feature code and the recording, and outputs the prediction result to the prediction result display module 6, which displays it to the user; the prediction result display module 6 can be built on visualization tools such as ECharts, Leaflet or Candela. Through the interaction among these modules, the system can adapt to scenarios with many features on the basis of low cost, with good model accuracy and generalization capability.
In a preferred embodiment, as shown in fig. 1, a predictive model training module 7 is further included; the prediction model training module 7 comprises a sample database 71, a speech recognition system 72, a Chinese word segmentation system 73 and a prediction model training engine 74; the sample database 71 stores original data; the speech recognition system 72 is connected with the sample database 71; the Chinese word segmentation system 73 is connected with a prediction model training engine 74; the predictive model training engine 74 is coupled to the audio prediction module 5.
In this preferred embodiment, the prediction model training module 7 is used to train the prediction model. The sample database 71 stores the training samples, whose content comprises post requirements, applicant resumes and the corresponding recordings. The speech recognition system 72 recognizes the recorded content and converts it into Chinese text; the Chinese word segmentation system 73 performs word segmentation on the Chinese text and generates a word segmentation result, which is transmitted to the prediction model training engine 74; the prediction model training engine 74 vectorizes the word segmentation result and performs training to generate an audio prediction model, which is transmitted to the audio prediction module 5.
In a specific embodiment, as shown in fig. 1 and fig. 2, an intelligent interview evaluation method includes: S1: inputting the applicant information by using the applicant information input module 1; S2: inputting the post information by using the post information input module 2; S3: inputting the interview audio information by using the audio input module 3; S4: inputting the applicant information, the post information and the interview audio information into the audio prediction module 5; S5: the audio prediction module 5 outputs an evaluation result by using the prediction model and transmits the evaluation result to the prediction result display module 6; S6: displaying the evaluation result by using the prediction result display module 6.
In this specific embodiment, the user inputs the applicant's resume (e.g., education background, age, work experience and professional qualifications) through the applicant information input module 1, which converts the resume into the corresponding resume feature code; the user inputs the post requirements (e.g., required education background, age, work experience and professional qualifications) through the post information input module 2, which converts them into the corresponding post information feature code; and the user inputs the recording of the whole interview question-and-answer process through the audio input module 3. The audio prediction module 5 outputs a corresponding prediction result from the resume feature code, the post information feature code and the recording, and outputs it to the prediction result display module 6, which displays it to the user; the user then judges from the displayed content whether the applicant meets the requirements of the post.
In a preferred embodiment, as shown in fig. 1 and 3, the method further comprises S0: training the prediction model; S0 includes: S0-1: extracting the training applicant information from the sample database 71, and performing feature coding on it to obtain the training applicant feature code; S0-2: extracting the training post information from the sample database 71, and performing feature coding on it to obtain the training post feature code; S0-3: extracting the training audio information from the sample database 71, and performing speech recognition on it by using the speech recognition system 72 to generate text information; S0-4: segmenting the text information by using the Chinese word segmentation system 73 to generate segmented text; S0-5: performing model training on the segmented text, the training post feature code and the training applicant feature code by using the prediction model training engine 74 to generate the prediction model.
In this preferred embodiment, the training applicant information and the training post information are extracted from the sample database 71 and feature-coded, converting character information that the computer cannot use directly into numerical information. The corresponding training audio information is extracted from the sample database 71, and the speech recognition system 72 performs speech recognition on it to generate the corresponding text information, i.e., the audio is converted into characters. The Chinese word segmentation system 73 then segments the text to generate the segmented text, the word segmentation result is output to the prediction model training engine 74, and the prediction model training engine 74 is trained with a convolutional neural network training algorithm to obtain the corresponding prediction model.
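To make the word-segmentation step concrete, the following is a minimal sketch using the open-source jieba segmenter as a stand-in for the Chinese word segmentation system 73; the sample sentence is hypothetical interview text, not data from the patent.

```python
# Word-segmentation sketch; jieba is an assumed stand-in for the patent's
# Chinese word segmentation system, and the sentence is hypothetical.
import jieba

answer_text = "我有三年机器学习项目经验"  # "I have three years of ML project experience"
tokens = jieba.lcut(answer_text)
print(tokens)  # e.g. ['我', '有', '三年', '机器学习', '项目', '经验']
```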
In a preferred embodiment, as shown in fig. 1 and fig. 3, the method adopted by the feature coding is one of one-hot encoding and label encoding.
In this preferred embodiment, one-hot encoding is suitable for training and using models that are sensitive to numerical magnitude, while label encoding is suitable for training and using models that are insensitive to numerical magnitude; the appropriate feature coding mode can be selected according to the actual application scenario.
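A minimal sketch of the two options, assuming scikit-learn is available; the education-level values are hypothetical stand-ins for the resume and post attributes:

```python
# Feature-coding sketch with scikit-learn; field values are hypothetical.
import numpy as np
from sklearn.preprocessing import LabelEncoder, OneHotEncoder

education = np.array([["bachelor"], ["master"], ["doctor"], ["bachelor"]])

# One-hot encoding: one binary column per category, no implied ordering;
# the usual choice when the model is sensitive to numerical magnitude.
onehot = OneHotEncoder().fit_transform(education).toarray()
print(onehot)   # [[1. 0. 0.], [0. 0. 1.], [0. 1. 0.], [1. 0. 0.]]

# Label encoding: one integer per category; compact, fine for models that
# are insensitive to the implied ordering (e.g. tree-based models).
labels = LabelEncoder().fit_transform(education.ravel())
print(labels)   # [0 2 1 0]
```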
In a preferred embodiment, as shown in fig. 1 and 4, S0-3 includes: S0-3-1: converting the training audio information into the WAV format to obtain WAV audio information; S0-3-2: performing VAD processing on the WAV audio information to obtain VAD audio data; S0-3-3: performing data framing on the VAD audio data to obtain audio frame data; S0-3-4: filtering the audio frame data by using a window function to obtain filtered voice data; S0-3-5: transforming the filtered voice data by using the fast Fourier transform to obtain spectrum data; S0-3-6: reading the spectrum data, and training an acoustic model by using a convolutional neural network; S0-3-7: recognizing the spectrum data by using the acoustic model to obtain an acoustic model recognition result; S0-3-8: performing CTC decoding on the acoustic model recognition result to obtain a voice pinyin sequence; S0-3-9: converting the voice pinyin sequence into text information by using a Chinese language model.
In this preferred embodiment, the training audio information is converted into the WAV format to obtain the WAV audio information; WAV is a lossless audio format, which helps ensure the accuracy of model training. VAD (voice activity detection) processing is then performed on the WAV audio information to eliminate silent periods, reducing the interference they would introduce into model training and yielding the VAD audio data. The VAD audio data is then framed, improving its processability and yielding the audio frame data; the audio frame data is filtered with a window function to obtain the filtered voice data, which is transformed with the fast Fourier transform to obtain the spectrum data. The spectrum data is read and used to train an acoustic model with a convolutional neural network; the performance of the acoustic model is then tested, and if the test result does not reach a preset threshold, the parameters of the convolutional neural network are adjusted and the acoustic model is trained again, until the test result reaches the preset threshold. After the acoustic model training is finished, the acoustic model is used to recognize the spectrum data and obtain the acoustic model recognition result. CTC decoding is performed on the recognition result so that the speech corresponds to the proper characters: consecutive identical symbols are merged into one, and the silence separator markers are removed, yielding the final voice pinyin sequence. A hidden Markov chain is used to construct a state network, and the path best matching the speech is searched in the state network, thereby converting the voice pinyin sequence into text information.
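As an illustration of the framing, windowing, FFT and CTC-decoding steps, here is a minimal NumPy sketch; the 16 kHz sampling rate and the 25 ms / 10 ms frame parameters are common defaults assumed for the example, not values specified in the patent.

```python
# Audio front-end sketch (S0-3-3 to S0-3-5) plus a greedy CTC collapse
# (S0-3-8); sampling rate and frame sizes are assumed defaults.
import numpy as np

def spectrogram(wav, sr=16000, frame_ms=25, hop_ms=10):
    frame = sr * frame_ms // 1000             # samples per frame (400)
    hop = sr * hop_ms // 1000                 # hop between frame starts (160)
    window = np.hamming(frame)                # window function for filtering
    n_frames = 1 + (len(wav) - frame) // hop  # assumes len(wav) >= frame
    frames = np.stack([wav[i * hop: i * hop + frame] for i in range(n_frames)])
    # Fast Fourier transform of each windowed frame -> magnitude spectrum.
    return np.abs(np.fft.rfft(frames * window, axis=1))

def ctc_greedy_decode(probs, blank=0):
    # Collapse the per-frame argmax path: merge consecutive identical
    # symbols and drop the blank/silence separator, as in CTC decoding.
    path = probs.argmax(axis=1)
    out, prev = [], blank
    for p in path:
        if p != blank and p != prev:
            out.append(int(p))
        prev = p
    return out

wav = np.random.default_rng(0).normal(size=16000)  # 1 s of stand-in audio
spec = spectrogram(wav)                            # shape (98, 201)
```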
In a preferred embodiment, as shown in fig. 1 and 5, S0-5 includes: S0-5-1: using the text information for word embedding to generate a text vector; S0-5-2: performing Average Embedding on all the questions and all the answers of the text vector to obtain question-answer word vectors; S0-5-3: performing feature interaction on the question-answer word vectors to obtain a plurality of groups of feature matrices and form a multi-channel feature tensor; S0-5-4: extracting high-order features of the multi-channel feature tensor by using a convolutional neural network to generate an output vector; S0-5-5: fusing the output vector, the training applicant feature code and the training post feature code to generate a scoring vector; S0-5-6: decomposing the scoring vector by using a factorization machine to obtain a scoring result; S0-5-7: calculating the error between the scoring result and the real score, and updating the parameters of the convolutional neural network according to the error by using the BP algorithm; if the number of executions of S0-5-6 has not reached the maximum number of iterations, returning to S0-5-6; if it has, outputting the convolutional neural network as the prediction model. Steps S0-5-1 to S0-5-3 are sketched in code below.
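The following minimal sketch covers S0-5-1 to S0-5-3 (word embedding, Average Embedding, outer-product feature interaction), assuming gensim 4.x for the word2vec training; the two-token lists and the single question/answer pair are hypothetical.

```python
# Embedding and feature-interaction sketch; the corpus is hypothetical and
# gensim's Word2Vec stands in for the word2vec/fasttext step named below.
import numpy as np
from gensim.models import Word2Vec

corpus = [["请", "介绍", "项目", "经验"],   # question tokens (hypothetical)
          ["我", "负责", "模型", "训练"]]   # answer tokens (hypothetical)
w2v = Word2Vec(corpus, vector_size=64, min_count=1)  # 64-dim, as in the text

def average_embedding(tokens):
    # Average Embedding: mean of the word vectors -> one 64-dim vector.
    return np.mean([w2v.wv[t] for t in tokens], axis=0)

q = average_embedding(corpus[0])   # question-side word vector
a = average_embedding(corpus[1])   # answer-side word vector
M = np.outer(q, a)                 # outer-product feature matrix, (64, 64)
# Stacking one such matrix per interaction group (5 in the embodiment)
# yields the multi-channel feature tensor of shape (5, 64, 64).
```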
In this preferred embodiment, the data is vectorized before model training: word2vec or fasttext is used to perform word vectorization on the text information and generate vectorized text information, the vectorized text information is divided into a question part and an answer part, and Average Embedding is performed on each of the two parts of word vectors to generate the question-answer word vectors, with the feature dimension set to 64. Feature interaction is then performed on the question-answer word vectors; the interaction adopts the outer product and can be formalized as follows:
M = q ⊗ a = q aᵀ, where M_ij = q_i a_j, and q and a are the question-side and answer-side question-answer word vectors
This yields a plurality of groups of feature matrices, generally 5 groups, which form a multi-channel feature tensor of shape 5 × 64 × 64. A 6-layer 3D convolutional neural network is then constructed and used to extract the high-order features in the multi-channel feature tensor, generating an output vector of length 4096 at the output layer. The output vector is fused with the training applicant feature code and the training post feature code to generate the final scoring vector, and the scoring vector is decomposed by a factorization machine, whose model is formally expressed as
ŷ(x) = w_0 + Σ_{i=1}^{n} w_i x_i + Σ_{i=1}^{n} Σ_{j=i+1}^{n} ⟨v_i, v_j⟩ x_i x_j
The factorization result output by the factorization machine is then constrained to the range 0-1 with a sigmoid function and multiplied by 100 to obtain the corresponding score prediction. The error between the scoring result and the real score is calculated, and the parameters of the 6-layer 3D convolutional neural network are updated with the BP algorithm according to the error; if the number of executions of S0-5-6 has not reached the maximum number of iterations, the process returns to S0-5-6, and once it has, the corresponding 6-layer 3D convolutional neural network is output.
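The scoring head (S0-5-6 and the sigmoid scaling just described) can be sketched as follows; this assumes the canonical degree-2 factorization machine, and the vector dimensions and random inputs are illustrative only, not the patent's trained parameters.

```python
# Factorization-machine scoring sketch; w0, w, V are assumed model
# parameters and x is the fused scoring vector.
import numpy as np

def fm_score(x, w0, w, V):
    linear = w0 + w @ x                      # global bias + linear terms
    # Pairwise terms via the standard FM identity:
    # sum_{i<j} <v_i, v_j> x_i x_j
    #   = 0.5 * sum_f [(sum_i V[i,f] x_i)^2 - sum_i V[i,f]^2 x_i^2]
    xv = V.T @ x
    pair = 0.5 * float(np.sum(xv ** 2 - (V.T ** 2) @ (x ** 2)))
    # Sigmoid squashes to (0, 1); multiplying by 100 gives the final score.
    return 100.0 / (1.0 + np.exp(-(linear + pair)))

rng = np.random.default_rng(0)
n, k = 4100, 16   # 4096-dim output vector fused with small feature codes
score = fm_score(rng.normal(size=n) * 0.01, 0.0,
                 rng.normal(size=n) * 0.01, rng.normal(size=(n, k)) * 0.01)
```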
In a specific embodiment, the interview intelligent evaluation equipment runs an interview intelligent evaluation system and executes an interview intelligent evaluation method, and comprises a memory and an arithmetic unit.
In this embodiment, the memory and the arithmetic unit provide the storage and computing services; they may be a virtual memory and arithmetic unit provided by a virtualization service provider, or a physical memory and arithmetic unit, and the appropriate ones can be selected according to actual cost accounting.
It should be understood that the above-described embodiments are merely exemplary of the present invention, and are not intended to limit the present invention, and that any modification, equivalent replacement, or improvement made without departing from the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (8)

1. An interview intelligent evaluation system is characterized by comprising an applicant information input module, a post information input module, an audio input module, a preprocessing module, an audio prediction module and a prediction result display module;
the applicant information input module is a human-computer interaction interface; the applicant information input module is connected with the preprocessing module;
the post information input module is a human-computer interaction interface; the post information input module is connected with the preprocessing module;
the audio input module is a human-computer interaction interface; the audio input module is connected with the preprocessing module;
the preprocessing module is connected with the audio prediction module;
the audio prediction module is connected with the prediction result display module.
2. The interview intelligence evaluation system of claim 1, further comprising a predictive model training module;
the prediction model training module comprises a sample database, a voice recognition system, a Chinese word segmentation system and a prediction model training engine;
the sample database stores original data; the voice recognition system is connected with the sample database; the Chinese word segmentation system is connected with the prediction model training engine; the prediction model training engine is connected with the audio prediction module.
3. An interview intelligent evaluation method using the interview intelligent evaluation system according to claim 1 or claim 2, comprising:
S1: inputting applicant information by using the applicant information input module;
S2: inputting post information by using the post information input module;
S3: inputting interview audio information by using the audio input module;
S4: inputting the applicant information, the post information and the interview audio information into the audio prediction module;
S5: the audio prediction module outputs an evaluation result by using the prediction model and transmits the evaluation result to the prediction result display module;
S6: displaying the evaluation result by using the prediction result display module.
4. The interview intelligent evaluation method according to claim 3, further comprising S0: training the predictive model;
S0 includes:
S0-1: extracting information of an applicant for training from the sample database, and performing feature coding on it to obtain the training applicant feature code;
S0-2: extracting training post information from the sample database, and performing the feature coding on it to obtain the training post feature code;
S0-3: extracting audio information for training from the sample database, and performing speech recognition on it by using the speech recognition system to generate text information;
S0-4: segmenting the text information by using the Chinese word segmentation system to generate segmented text;
S0-5: performing model training on the segmented text, the training post feature code and the training applicant feature code by using the prediction model training engine to generate the prediction model.
5. The interview intelligence evaluation method according to claim 4, wherein the feature coding is one of one-hot encoding and label encoding.
6. The interview intelligent evaluation method according to claim 4, wherein S0-3 comprises:
S0-3-1: converting the audio information for training into a WAV format to obtain WAV audio information;
S0-3-2: performing VAD processing on the WAV audio information to obtain VAD audio data;
S0-3-3: performing data framing on the VAD audio data to obtain audio frame data;
S0-3-4: filtering the audio frame data by using a window function to obtain filtered voice data;
S0-3-5: transforming the filtered voice data by using a fast Fourier transform to obtain spectrum data;
S0-3-6: reading the spectrum data, and training an acoustic model by using a convolutional neural network;
S0-3-7: recognizing the spectrum data by using the acoustic model to obtain an acoustic model recognition result;
S0-3-8: performing CTC decoding on the acoustic model recognition result to obtain a voice pinyin sequence;
S0-3-9: converting the voice pinyin sequence into text information by using a Chinese language model.
7. The interview intelligent evaluation method according to claim 4, wherein S0-5 comprises:
S0-5-1: using the text information for word embedding to generate a text vector;
S0-5-2: performing Average Embedding on all the questions and all the answers of the text vector to obtain question-answer word vectors;
S0-5-3: performing feature interaction on the question-answer word vectors to obtain a plurality of groups of feature matrices and form a multi-channel feature tensor;
S0-5-4: extracting high-order features of the multi-channel feature tensor by using a convolutional neural network to generate an output vector;
S0-5-5: fusing the output vector, the training applicant feature code and the training post feature code to generate a scoring vector;
S0-5-6: decomposing the scoring vector by using a factorization machine to obtain a scoring result;
S0-5-7: calculating the error between the scoring result and the real score, and adjusting the parameters of the convolutional neural network according to the error by using the BP-algorithm update formula; if the number of executions of S0-5-6 has not reached the maximum number of iterations, returning to S0-5-6; if it has, outputting the convolutional neural network as the prediction model.
8. An intelligent interview evaluating apparatus operating the intelligent interview evaluating system according to claim 1 or claim 2 and executing the intelligent interview evaluating method according to any one of claims 3 to 7, wherein the intelligent interview evaluating apparatus comprises a memory and an arithmetic unit.
Application CN201910885901.6A, priority date 2019-09-19, filed 2019-09-19: Intelligent interview evaluation system and method. Status: Pending. Publication: CN110782221A (en).

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201910885901.6A | 2019-09-19 | 2019-09-19 | Intelligent interview evaluation system and method

Publications (1)

Publication Number | Publication Date
CN110782221A | 2020-02-11

Family ID: 69383804

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201910885901.6A | Intelligent interview evaluation system and method | 2019-09-19 | 2019-09-19

Country Status (1)

Country | Link
CN | CN110782221A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
CN105760965A * | 2016-03-15 | 2016-07-13 | 北京百度网讯科技有限公司 | Pre-estimated model parameter training method, service quality pre-estimation method and corresponding devices
CN109544104A * | 2018-11-01 | 2019-03-29 | 平安科技(深圳)有限公司 | A kind of recruitment data processing method and device
CN109670023A * | 2018-12-14 | 2019-04-23 | 平安城市建设科技(深圳)有限公司 | Man-machine automatic top method for testing, device, equipment and storage medium
CN110135800A * | 2019-04-23 | 2019-08-16 | 南京葡萄诚信息科技有限公司 | A kind of artificial intelligence video interview method and system
CN110211591A * | 2019-06-24 | 2019-09-06 | 卓尔智联(武汉)研究院有限公司 | Interview data analysing method, computer installation and medium based on emotional semantic classification

Cited By (3)

Publication number | Priority date | Publication date | Assignee | Title
CN111340444A * | 2020-02-17 | 2020-06-26 | 斯智信息科技(上海)有限公司 | Interactive scenarized interviewing method, system, device and medium
CN111340444B | 2020-02-17 | 2021-09-03 | 斯智信息科技(上海)有限公司 | Interactive scenarized interviewing method, system, device and medium
CN113535820A * | 2021-07-20 | 2021-10-22 | 贵州电网有限责任公司 | Electrical operating personnel attribute presumption method based on convolutional neural network


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination