CN111222837A - Intelligent interviewing method, system, equipment and computer storage medium - Google Patents

Intelligent interviewing method, system, equipment and computer storage medium

Info

Publication number
CN111222837A
CN111222837A (application CN201910968962.9A)
Authority
CN
China
Prior art keywords
candidate
information
text information
resume
emotion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910968962.9A
Other languages
Chinese (zh)
Inventor
刘志龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Property and Casualty Insurance Company of China Ltd
Original Assignee
Ping An Property and Casualty Insurance Company of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Property and Casualty Insurance Company of China Ltd filed Critical Ping An Property and Casualty Insurance Company of China Ltd
Priority to CN201910968962.9A
Publication of CN111222837A
Pending legal-status: Critical, Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/10Office automation; Time management
    • G06Q10/105Human resources
    • G06Q10/1053Employment or hiring

Landscapes

  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Engineering & Computer Science (AREA)
  • Strategic Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Operations Research (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Data Mining & Analysis (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The embodiment of the invention provides an intelligent interview method, which comprises the following steps: acquiring resume information of a candidate, verifying the authenticity of the resume information through an application programming interface of a preset website, and recording the verification result; receiving voice data of the candidate returned from the interview site, recognizing text information corresponding to the voice data, performing emotion labeling on the text information, and analyzing the labeled text information to obtain a quality model of the candidate; receiving test question answers uploaded by the candidate, checking the correctness of the answers to obtain answer scores, and recording the professional grade corresponding to the scores; and performing a weighted operation on the recorded verification results corresponding to the candidate to obtain a final evaluation result of the candidate. The invention saves the time cost of manual interviews and improves the accuracy with which candidates are evaluated.

Description

Intelligent interviewing method, system, equipment and computer storage medium
Technical Field
The embodiments of the invention relate to the field of human-computer interaction, and in particular to an intelligent interview method, an intelligent interview system, a computer device, and a computer-readable storage medium.
Background
Recruitment is an important part of human resource management. In every peak hiring period, an employer's recruiters must screen a large number of candidates to find the talent suited to the corresponding posts, which imposes a heavy time and labor cost on the enterprise. At present, an enterprise can use software to filter out part of the candidates by setting coarse screening conditions such as educational background and work experience, which reduces the corresponding time cost. However, a candidate's personality, verbal communication ability, and the stress tolerance required by some posts cannot be truly reflected in a resume, so a manual interview is still required to confirm the candidate's communication ability and stress tolerance, and the labor cost remains high.
Disclosure of Invention
In view of this, an object of the embodiments of the present invention is to provide an intelligent interviewing method, system, computer device and computer-readable storage medium that can more accurately evaluate how well an interview candidate matches the position applied for, thereby replacing the manual interview step and saving labor cost.
In order to achieve the above object, an embodiment of the present invention provides an intelligent interview method, including the following steps:
acquiring candidate resume information uploaded by an interviewer terminal, acquiring information on a preset website through an application programming interface of the preset website to verify the authenticity of the resume information and recording a resume information verification result;
receiving voice data of the candidate returned by the interview site terminal, recognizing text information corresponding to the voice data, performing emotion labeling on the text information, and analyzing the labeled text information to obtain a quality model of the candidate;
receiving test question answers uploaded by the candidate terminal, checking the correctness of the test question answers, obtaining answer scores, and recording professional grades corresponding to the scores;
and performing weighted operation according to the resume information verification result, the quality model and the professional grade to obtain a final evaluation result of the candidate, and sending the evaluation result to the interviewer terminal.
Preferably, the steps of acquiring candidate resume information uploaded by the interviewer terminal, acquiring information on a preset website through an application programming interface of the preset website to verify the authenticity of the resume information, and recording the resume information verification result include:
acquiring candidate resume information uploaded by the interviewer terminal;
calling real historical data corresponding to the candidate in a preset website database through an application programming interface of a preset website;
and extracting specified items in the candidate resume information, verifying the correctness of the specified items by taking the real resume data as a reference, and storing the verification result of each item.
Preferably, the step of recognizing text information corresponding to the voice data and performing emotion labeling on the text information includes:
recognizing the voice data and generating corresponding text information;
performing word segmentation processing on the generated text information, and calculating the emotion score of each word segmentation;
and taking a single sentence as a unit, statistically combining the emotion scores of the word segments in the sentence to obtain an emotion score for each sentence, and assigning each sentence a corresponding label according to its emotion score.
Preferably, the step of recognizing text information corresponding to the voice data and performing emotion labeling on the text information further includes:
recognizing the voice data and generating corresponding text information;
identifying an emotional tendency field in the text information, and performing frequency calculation on the emotional tendency field;
and giving weights to the emotional tendency fields and the frequency values thereof by referring to a preset expectation analysis library, calculating to generate a final emotional value, and giving corresponding emotional labels to the text information according to the final emotional value.
Preferably, the step of emotion labeling the text information includes:
and searching a relation table between the prestored emotion scores and the emotion labels to obtain the emotion labels corresponding to the sentences, and adding storage address pointers of the corresponding emotion labels to the head or tail of the data of each sentence for storage.
Preferably, the step of analyzing the labeled text information to obtain the quality model of the candidate includes:
searching for a quality model set in a preset quality model library according to the interview question corresponding to the text information;
and calculating the degree of matching between the labeled text information and each model in the set, and selecting the model with the highest matching degree as the quality model of the candidate.
Preferably, the performing a weighted operation according to the resume information verification result, the quality model, and the professional grade to obtain a final evaluation result of the candidate, and sending the evaluation result to the interviewer terminal includes:
giving weight values to the resume verification result, the quality model information and the professional grade information and calculating a final evaluation score, wherein the resume verification weight is greater than the quality model information weight, and the quality model information weight is greater than the professional grade information weight;
and judging whether the final evaluation score is greater than a preset threshold value, and if so, determining that the candidate can be hired.
In order to achieve the above object, an embodiment of the present invention further provides an intelligent interview system, including:
the resume verification module is used for acquiring candidate resume information uploaded by the interviewer terminal, acquiring information on a preset website through an application programming interface of the preset website to verify the authenticity of the resume information and recording a resume information verification result;
the quality model screening module is used for receiving the voice data of the candidate returned by the interview site terminal, recognizing text information corresponding to the voice data, performing emotion labeling on the text information, and analyzing the labeled text information to obtain a quality model of the candidate;
the answer rating module is used for receiving the test question answers uploaded by the candidate terminal, checking the correctness of the test question answers, obtaining answer scores and recording professional grades corresponding to the scores;
and the weighted evaluation module is used for carrying out weighted operation according to the resume information verification result, the quality model and the professional grade to obtain a final evaluation result of the candidate and sending the evaluation result to the interviewer terminal.
In order to achieve the above object, an embodiment of the present invention further provides a computer device, where the computer device includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the computer program is executed by the processor, the computer device implements the steps of the intelligent interview method as described above.
To achieve the above object, an embodiment of the present invention further provides a computer-readable storage medium, in which a computer program is stored, where the computer program is executable by at least one processor, so as to cause the at least one processor to execute the steps of the intelligent interview method.
In addition, according to the intelligent interviewing method, the intelligent interviewing system, the computer equipment and the computer-readable storage medium, emotion analysis and candidate quality model selection are configured for the question-and-answer step, so that the evaluation result of the candidate matches the requirements of the post applied for, such as resistance to work pressure; the evaluation result is therefore more accurate, the time cost of manual interviews is saved, and the accuracy of candidate selection is improved.
Drawings
FIG. 1 is a flow chart of the steps corresponding to the embodiment of the intelligent interviewing method of the invention;
FIG. 2 is a schematic flowchart of step S100 in the first embodiment of the intelligent interview method according to the present invention;
FIG. 3 is a schematic flowchart of step S200 according to a first embodiment of the intelligent interview method;
fig. 4 is a schematic flow chart of another embodiment of the step S200 in the first embodiment of the intelligent interview method of the invention;
FIG. 5 is a flowchart illustrating a step S200 according to an embodiment of the intelligent interviewing method;
FIG. 6 is a flowchart illustrating a step S400 of the intelligent interview method according to the present invention;
FIG. 7 is a schematic diagram of a second program module of the intelligent interview system according to the embodiment of the invention;
fig. 8 is a schematic diagram of a hardware structure of a third embodiment of the computer apparatus according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the examples of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be understood that the term "and/or" as used herein merely describes an association between objects, meaning that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
It should be understood that although the terms first, second, etc. may be used to describe the designated key in embodiments of the present invention, the designated key should not be limited to these terms. These terms are only used to distinguish specified keywords from each other. For example, the first specified keyword may also be referred to as the second specified keyword, and similarly, the second specified keyword may also be referred to as the first specified keyword, without departing from the scope of embodiments of the present invention.
The word "if" as used herein may be interpreted as "when", "upon", "in response to determining", or "in response to detecting", depending on the context. Similarly, the phrase "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined", "in response to determining", "when (the stated condition or event) is detected", or "in response to detecting (the stated condition or event)", depending on the context.
Example one
Referring to FIG. 1:
Step S100, candidate resume information uploaded by an interviewer terminal is obtained, information on a preset website is obtained through an application programming interface of the preset website to verify the authenticity of the resume information, and the resume information verification result is recorded.
In the recruitment process, resume falsification is inevitable, so it is necessary to verify information such as the educational background in the candidate's resume.
For example, the current interview candidate is Zhang San, who has uploaded resume information stating a bachelor's degree from Peking University. The name "Zhang San" is extracted from the resume information as a retrieval element, the API query interface of the student information network is called, the real student-status information corresponding to Zhang San is obtained from the student information network database, and it is checked against the resume uploaded by the candidate.
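By way of illustration only, such an API query could be sketched roughly as follows; the endpoint address, parameter name, and response format below are assumptions made for explanation and are not part of the disclosed method:
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class DegreeLookup {
    // Query a hypothetical credential-verification endpoint by candidate name and
    // return its raw response, e.g. the real educational record to compare with the resume.
    public static String queryRealRecord(String candidateName) throws Exception {
        String query = URLEncoder.encode(candidateName, StandardCharsets.UTF_8);
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example-credential-check.invalid/api?name=" + query))
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        return response.body();
    }
}
The real interface of the student information network may differ in address, authentication, and payload; only the general shape of the call is shown.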
Step S200, voice data of the candidate returned by the interview site terminal is received, text information corresponding to the voice data is recognized, emotion labeling is performed on the text information, and the labeled text information is analyzed to obtain a quality model of the candidate.
Based on the idea of replacing the manual interview, the step of communicating with the candidate is realized through intelligent interaction. An audio collector such as a microphone and an audio player such as a loudspeaker are arranged at the interview site. The processing unit sends preset basic questions to the interview site and informs the candidate either by displaying text on a screen or by playing the questions through the loudspeaker. When the candidate answers a question, the microphone feeds the collected voice data back to the processing unit over a transmission line, and the processing unit restores the analog-signal voice data and recognizes the speech text information in it.
After the speech text information is recognized, emotion analysis is performed on it and emotion labels are added. Emotion analysis, also known as emotion recognition in the field of computer technology, is the process of analyzing, processing, summarizing, and reasoning over subjective text that carries emotional color. The invention provides three emotion recognition modes tailored to the needs of an intelligent interview system; they are explained in the following paragraphs.
After the emotion analysis of the text information, emotion labels are added to it according to the analysis result. The labels are data such as specific numbers, characters, or emoticons, and they help other processing units or modules identify the emotional content contained in the text information. An emotion label typically occupies only a few bytes. The basic unit of emotion labeling may be a word, a sentence, a paragraph, or even the entire text, which the present invention does not limit.
The emotion labels serve as a reference for selecting the candidate's quality model, which is a data set reflecting non-physical attributes of the candidate such as mental traits and emotional tendencies.
step 300, receiving the test question answers uploaded by the candidate terminal, checking the correctness of the test question answers, obtaining answer scores, and recording the professional grades corresponding to the scores.
Professional competence is also an indispensable aspect of an employer's evaluation of a candidate, and the most direct way to check the professional grade is a written test.
In the test question answering step, the candidate answers a preset test paper on a computer device and uploads the answers when finished. The processing unit obtains the uploaded test question answers, verifies the candidate's answer data against preset verification data, and calculates the score of the answer data.
A corresponding professional grade can be set for each range of test scores. Illustratively, a rating strategy is preset in which a score of 90-100 is rated A, 70-90 is rated B, and 40-70 is rated C. Whether this strategy is applied depends on the actual demand scenario; the strategy is given only as one concept to cover more demand scenarios and is not limiting.
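A minimal sketch of this rating strategy, mirroring the exemplary segments above (how boundary scores are assigned and the catch-all grade D for scores below the lowest segment are added assumptions):
// Map an answer score to a professional grade using the exemplary 90-100 / 70-90 / 40-70 segments.
public static char professionalGrade(int score) {
    if (score >= 90) return 'A'; // 90-100
    if (score >= 70) return 'B'; // 70-89
    if (score >= 40) return 'C'; // 40-69
    return 'D';                  // below the exemplary segments (assumed catch-all grade)
}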
In addition, steps S100, S200, and S300 may be performed in any order; the present invention does not limit the sequence of these three steps.
Step S400, carrying out weighted operation according to the resume information verification result, the quality model and the professional grade to obtain a final evaluation result of the candidate, and sending the evaluation result to the interviewer terminal.
Specifically, the recorded results may include the resume verification result, the selected candidate quality model, and the professional grade obtained in the previous steps, and may further include other additional results, for example: whether the candidate's salary expectation is acceptable, or whether the candidate is likely to be stable, judged from the reason for leaving the previous job. A technician can add verification steps and verification parameters according to the demand scenario.
And giving a weight value to each verification result to perform weighting operation to generate a final evaluation result of the candidate.
The invention replaces the traditional manual interview by means of computer equipment and intelligent interaction. In addition, emotion analysis and candidate quality model selection are provided for the question-and-answer step, so that the evaluation result of the candidate better matches the requirements of the post applied for, such as resistance to work pressure; the evaluation result is therefore more accurate, the time cost of manual interviews is saved, and the accuracy of candidate selection is improved.
Optionally, referring to fig. 2, the step S100 of obtaining candidate resume information uploaded by the interviewer terminal, obtaining information on a preset website through an application programming interface of the preset website to verify the authenticity of the resume information, and recording a resume information verification result includes:
step S110 obtains candidate resume information uploaded by the interviewer terminal.
The candidate resume information may come from a resume actively submitted by the candidate or be pulled from a related recruitment website. Generally, however, the resume a candidate brings to the interview is more complete than the one on the recruitment website, so the resume information may also be obtained by scanning the paper resume submitted by the candidate at the interview to generate a PDF file; the processing unit then recognizes each text field in the PDF file for the subsequent resume verification step.
Step S120, calling real historical data corresponding to the candidate in a preset website database through an application programming interface of a preset website;
Verifying the authenticity of the candidate's resume requires real data that can serve as a reference for comparison, for example from officially certified database websites such as the student information network. An application programming interface is also called an API in computer technology. Specifically, the name of the candidate is used as the query condition, and the student information network database interface is called to acquire the educational background information corresponding to that name.
Step S130 extracts a designated entry in the candidate resume information, verifies the correctness of the designated entry based on the real resume data, and stores the verification result of each entry.
Illustratively, the candidate's resume states a bachelor's degree from Peking University. The field "Peking University" is extracted, each text field in the real record pulled from the student information network is traversed, and the authenticity of the field is verified. A simplified code fragment is provided to aid understanding:
if ("Peking University".equals(value)) {
    System.out.println("candidate resume item verified");
}
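A slightly fuller sketch of step S130, under the assumption (made only for illustration) that both the specified resume entries and the pulled real record are represented as key-value maps, traverses each specified entry and stores one verification result per entry:
import java.util.HashMap;
import java.util.Map;

public class ResumeEntryVerifier {
    // Compare each specified resume entry against the real record pulled from the
    // preset website and keep one boolean verification result per entry.
    public static Map<String, Boolean> verifyEntries(Map<String, String> resumeEntries,
                                                     Map<String, String> realRecord) {
        Map<String, Boolean> results = new HashMap<>();
        for (Map.Entry<String, String> entry : resumeEntries.entrySet()) {
            String claimed = entry.getValue();
            String actual = realRecord.get(entry.getKey()); // e.g. key "degree" (assumed entry name)
            results.put(entry.getKey(), claimed != null && claimed.equals(actual));
        }
        return results;
    }
}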
Optionally, referring to FIG. 3:
Step S200, in which the voice data of the candidate returned by the interview site terminal is received, text information corresponding to the voice data is recognized, emotion labeling is performed on the text information, and the labeled text information is analyzed to obtain the quality model of the candidate, includes the following steps:
step S210, recognizing the voice data and generating corresponding text information;
The voice data collected by the microphone or another audio collector is an analog signal. The processing unit converts it into a digital signal that the computer can recognize, or another module such as a signal processing module performs the conversion. After the conversion, the processing unit reads the text information from the digital signal and loads it into a buffer for the subsequent emotion labeling processing.
Step S220, performing word segmentation processing on the generated text information, and calculating the emotion score of each word segmentation;
Specifically, the minimum granularity of the emotion analysis object is a word, but the basic unit that expresses an emotion is a sentence. Although a word can convey basic emotional information, a single word lacks an object and a degree of association, and the same words combined differently can yield different emotional intensities or even opposite emotional tendencies. Taking the sentence as the most basic granularity of emotion analysis is therefore reasonable and gives high accuracy.
Illustratively, the word segmentation result is as follows:
I / feel / the company / benefits / are okay / , / but / overtime / too much / .
For each word segment, a technician presets a corpus (sentiment lexicon) in which a weight is defined for each emotion word according to the intensity of that word.
Specifically, "okay" is a positive word and "too much" is a negative word, and the emotional intensity of "too much" is high, reflecting the candidate's strong complaint about overtime. This implies that the candidate's resistance to work pressure is not high; if the post applied for involves frequent overtime, hiring this candidate could easily lead to instability, and the candidate would be likely to resign soon after starting.
Within a sentence, positive words are recorded as positive numbers and negative words as negative numbers; in the database, "okay" is "+1" and "too much" is "-2".
In step S230, taking a single sentence as a unit, the emotion scores of the word segments in the sentence are statistically combined to obtain the emotion score of each sentence, and a corresponding label is assigned to each sentence.
Continuing the example above, the emotion score of a single sentence is calculated by summing the emotion scores of its word segments. In the sentence "I / feel / the company / benefits / are okay / , / but / overtime / too much / ." there are only two emotion words, "okay" and "too much"; "okay" scores "+1" and "too much" scores "-2", so the emotion score of the sentence is 1 - 2 = -1, and the emotional content of the sentence is negative.
After the emotion information is obtained, it is set as the label of the text information, and the mapping relationship between the emotion information and the sentence is stored.
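A minimal sketch of this per-sentence calculation, assuming the sentiment lexicon is held as a map from emotion words to scores (the scores simply restate the "+1"/"-2" example above):
import java.util.List;
import java.util.Map;

public class SentenceSentiment {
    // Sum the lexicon scores of the word segments of one sentence and derive a label.
    public static String labelSentence(List<String> segments, Map<String, Integer> lexicon) {
        int score = 0;
        for (String word : segments) {
            score += lexicon.getOrDefault(word, 0); // words absent from the lexicon are treated as neutral
        }
        if (score > 0) return "positive";
        if (score < 0) return "negative";
        return "neutral";
    }
}
For the example sentence, the two emotion words contribute +1 and -2, giving a sentence score of -1 and the label "negative".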
Optionally, referring to FIG. 4:
Step S200, in which the voice data of the candidate returned by the interview site terminal is received, text information corresponding to the voice data is recognized, emotion labeling is performed on the text information, and the labeled text information is analyzed to obtain the quality model of the candidate, further includes the following steps:
step S240 identifies the voice data, and generates text information corresponding to the voice data.
Step S250, identifying the emotional tendency field in the text information, and performing frequency calculation on the emotional tendency field.
Frequency calculation over emotional tendency fields is the second manner provided by the present invention for accurate emotion recognition of text information. Before explaining it further, first consider how a person interprets a sentence.
For example: "He looks very tired; he worked an extra shift yesterday." The mention of the extra shift merely states a specific fact, while the word that actually carries the emotional effect is "tired"; the sentence can therefore be evaluated as negative, and "tired" can be defined as an emotional tendency field in the text information.
And step S260, giving weights to the emotional tendency fields and the frequency values thereof by referring to a preset expectation analysis library, calculating to generate a final emotional value, and giving corresponding emotional labels to the text information according to the final emotional value.
Following this idea, the frequency of each emotional tendency field in a single sentence or in the whole text information is calculated, and the emotion score of the field is calculated from the frequency value and the preset weight corresponding to it. The emotion scores of all the emotional tendency fields are then combined into one value, which gives the final emotion value.
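A minimal sketch of steps S250-S260, under the simplifying assumption that the final emotion value is the sum over tendency fields of (preset weight x frequency); the text leaves the exact weighting rule open, so this is one possible reading only:
import java.util.Map;

public class TendencyFieldScorer {
    // Count occurrences of each emotional-tendency field in the text and
    // combine them into a final emotion value using per-field weights.
    public static double finalEmotionValue(String text, Map<String, Double> fieldWeights) {
        double total = 0.0;
        for (Map.Entry<String, Double> field : fieldWeights.entrySet()) {
            int frequency = 0;
            int from = 0;
            while ((from = text.indexOf(field.getKey(), from)) != -1) {
                frequency++;
                from += field.getKey().length();
            }
            total += field.getValue() * frequency; // assumed rule: weight x frequency
        }
        return total;
    }
}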
Optionally, the step of performing emotion labeling on the text information in step S220 includes:
searching a relation table between the prestored emotion scores and the emotion labels to obtain the emotion label corresponding to each sentence, and adding the storage address pointer of the corresponding emotion label to the head or tail of the data of each sentence for storage.
Since emoticons can represent emotion information, the emotion information can be matched to a corresponding emoticon. Illustratively, a plurality of emoticons and the emotion information corresponding to each of them are stored in a preset database, so that during matching the emoticon corresponding to the emotion score calculated in the previous steps can be found in the database. For example, if a negative emotion is recognized for a sentence complaining about excessive overtime, the emoticon representing negative emotion is mapped to that sentence and stored as its label.
In another embodiment, a label may also be represented by a symbolic number, as follows:
example documentation:
<none> This time we chose to stay in a five-star hotel.
<+S> Very good.
<N> The lunch was mediocre; no matter how many people went, no extra dishes were added.
Table 1 (examples of emotion labels and their representations; provided as images in the original publication and not reproduced here)
As shown in Table 1, the tags are used to label the emotional tendency of a sentence. The position of the label is at the beginning of the sentence. The label is represented as described above.
Specifically, a relation table between the prestored emotion scores and emotion labels is searched to obtain emotion labels corresponding to the sentences, and storage address pointers of the corresponding emotion labels are added to the head or tail of data of the sentences to be stored together.
Taking "<none> This time we chose to stay in a five-star hotel." as an example, the analysis finds that the emotion of "This time we chose to stay in a five-star hotel" is none, and the storage address of "none" is 0010. Then 0010 is added to the head or tail of the sentence data, in the form "0010 This time we chose to stay in a five-star hotel" or "This time we chose to stay in a five-star hotel 0010". The sentence is of course stored as byte data composed of 0s and 1s; characters are used here only for convenience of explanation. Since the pointer occupies fewer bytes than the data representing the actual emotion, storing the pointer reduces the occupied storage space.
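To make the pointer-based storage concrete, a sketch that assumes the relation table maps an emotion score directly to the storage address of its label and that the address is simply prefixed to the sentence data, as in the "0010" example above; the table layout is an assumption for illustration:
import java.util.Map;

public class LabelPointerStore {
    // Look up the storage address of the emotion label for a sentence score in the
    // pre-stored relation table and prepend it to the sentence data ("0010" + sentence).
    public static String storeWithLabelPointer(String sentence, int sentenceScore,
                                               Map<Integer, String> scoreToAddress) {
        String address = scoreToAddress.getOrDefault(sentenceScore, "0000"); // assumed default address
        return address + sentence; // the pointer is stored at the head of the sentence data
    }
}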
Optionally, referring to fig. 5, the step of analyzing the labeled text information to obtain the quality model of the candidate in step S200 includes:
Step S270, searching for a quality model set in a preset quality model library according to the interview question corresponding to the text information;
Step S280, calculating the degree of matching between the labeled text information and each model in the set, and selecting the model with the highest matching degree as the quality model of the candidate.
Specifically, for the labeled text information, a matching quality model can be found in a pre-constructed quality model library, giving a quality model that corresponds to the candidate and reflects the candidate's qualities. The parameters characterized by the quality model include personality, verbal expression ability, communication ability, resistance to pressure, and the like.
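One possible reading of the matching-degree calculation, purely as a sketch and assuming each quality model in the set is represented by a set of labelled keywords (a representation not fixed by the text), is a simple overlap ratio:
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class QualityModelMatcher {
    // Pick the quality model whose keyword set overlaps most with the labelled text tokens.
    public static String bestQualityModel(Set<String> labelledTokens,
                                          Map<String, Set<String>> modelKeywords) {
        String best = null;
        double bestDegree = -1.0;
        for (Map.Entry<String, Set<String>> model : modelKeywords.entrySet()) {
            Set<String> common = new HashSet<>(model.getValue());
            common.retainAll(labelledTokens);
            double degree = model.getValue().isEmpty() ? 0.0
                           : (double) common.size() / model.getValue().size();
            if (degree > bestDegree) {
                bestDegree = degree;
                best = model.getKey();
            }
        }
        return best; // the item with the highest matching degree
    }
}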
Alternatively, referring to fig. 6, step S400 includes:
s410, giving weight values to the resume verification result, the prime model information and the professional grade information and calculating a final evaluation score, wherein the resume verification weight is greater than the prime model information weight, and the prime model information weight is greater than the professional grade information weight;
s420, judging whether the final evaluation score is larger than a preset threshold value or not, and if so, defining the candidate as being capable of being recorded.
For example, the preset weighting algorithm gives the candidate's quality model score a weight of 10%, the professional score 80%, and the remaining resume elements (e.g., educational background) 10%. If Zhang San's quality model score is 30, professional score is 100, and remaining resume elements score 50, then Zhang San's evaluation result is 30 × 10% + 100 × 80% + 50 × 10% = 88. The interviewer can query the processing unit for the evaluation result of each candidate and hire preferentially on that basis. In another embodiment, the evaluation result can be presented as a report, for example presenting the candidate's quality model as a hexagonal (radar) diagram together with the specific professional score and the remaining resume elements to form a visual report. The proportions used in the weighting algorithm, as well as the algorithm itself, may be adjusted by the developer.
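The weighted operation and threshold judgment of steps S410-S420 can be sketched directly from the exemplary 10%/80%/10% weights above; the threshold value itself is left to the implementer:
public class WeightedEvaluation {
    // Weighted final score as in the example: 30*0.10 + 100*0.80 + 50*0.10 = 88.
    public static boolean admissible(double qualityModelScore, double professionalScore,
                                     double resumeScore, double threshold) {
        double finalScore = qualityModelScore * 0.10
                          + professionalScore  * 0.80
                          + resumeScore        * 0.10;
        return finalScore > threshold; // candidate deemed hirable if the score exceeds the threshold
    }
}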
In addition, in another embodiment, the present invention provides a third logic idea of emotion recognition as a supplement, including:
The first is to determine emotion information from the candidate's manner of speaking, that is, by extracting acoustic features from the voice data and analyzing them to identify the corresponding emotion information. The acoustic feature parameters may include LPCC (linear prediction cepstral coefficients), MFCC parameters, formant parameters, fundamental-frequency parameters based on prosodic features, energy-related parameters, speaking duration, and amplitude parameters.
The second is to determine emotion information from the semantics of the candidate's words, for example words spoken without strong tone: phrases complaining about heavy overtime can be compiled into a preset comparison template, and when the processing unit detects that the candidate utters such a phrase in a normal tone, it can still recognize that the candidate is expressing a negative emotion. In other embodiments, the processing unit may further capture the candidate's facial feature points through a camera in the interview room and analyze details such as facial expressions and throat movement to complete the emotion information recognition.
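For the second idea, a minimal sketch of matching the recognized text against a preset comparison template of such low-tone complaint phrases (the template contents are assumptions, not part of the disclosure):
import java.util.List;

public class TemplateEmotionMatcher {
    // Flag negative emotion when the recognized text contains a phrase from a
    // preset comparison template, even if the speaking tone is neutral.
    public static boolean matchesNegativeTemplate(String recognizedText, List<String> template) {
        for (String phrase : template) {
            if (recognizedText.contains(phrase)) {
                return true;
            }
        }
        return false;
    }
}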
Example two
Referring to FIG. 7, a schematic diagram of the program modules of the second embodiment of the intelligent interview system of the invention is shown. In this embodiment, the intelligent interview system 20 may include or be divided into one or more program modules, which are stored in a storage medium and executed by one or more processors to implement the present invention and the intelligent interview method described above. The program modules referred to in the embodiments of the present invention are a series of computer program instruction segments capable of performing specific functions, and they are better suited than the program itself to describing the execution process of the intelligent interview system 20 in the storage medium. The following description specifically introduces the functions of the program modules of this embodiment:
the resume verification module 200 is used for acquiring candidate resume information uploaded by the interviewer terminal, acquiring information on a preset website through an application programming interface of the preset website to verify the authenticity of the resume information and recording a resume information verification result;
in an exemplary embodiment, the resume verification module 200 is further configured to obtain resume information submitted by the candidate;
calling real historical data corresponding to the candidate in a preset website database through an application programming interface of a preset website;
and extracting specified items in the candidate resume information, verifying the correctness of the specified items by taking the real resume data as a reference, and storing the verification result of each item.
The quality model screening module 210 is configured to receive the voice data of the candidate returned by the interview site terminal, recognize text information corresponding to the voice data, perform emotion labeling on the text information, and analyze the labeled text information to obtain a quality model of the candidate;
in an exemplary embodiment, the quality model screening module 210 is further configured to recognize the voice data and generate corresponding text information;
performing word segmentation processing on the generated text information, and calculating the emotion score of each word segmentation;
and taking a single sentence as a unit, statistically combining the emotion scores of the word segments in the sentence to obtain an emotion score for each sentence, and assigning each sentence a corresponding label according to its emotion score.
In an exemplary embodiment, the quality model screening module 210 is further configured to recognize the voice data and generate corresponding text information;
identifying an emotional tendency field in the text information, and performing frequency calculation on the emotional tendency field;
and giving weights to the emotional tendency fields and the frequency values thereof by referring to a preset expectation analysis library, calculating to generate a final emotional value, and giving corresponding emotional labels to the text information according to the final emotional value.
Optionally, according to the form in which the emotion annotation is defined, the labels used by the quality model screening module 210 include emoticon labels, Arabic numeral labels, and letter labels.
In an exemplary embodiment, the quality model screening module 210 is further configured to search for a quality model set in a preset quality model library according to the interview question corresponding to the text information;
and to calculate the degree of matching between the labeled text information and each model in the set, and select the model with the highest matching degree as the quality model of the candidate.
The answer rating module 220 is configured to receive the test question answers uploaded by the candidate terminal, check correctness of the test question answers, obtain answer scores, and record professional grades corresponding to the scores;
and the weighted evaluation module 230 is configured to perform weighted operation according to the resume information verification result, the quality model, and the professional grade to obtain a final evaluation result of the candidate, and send the evaluation result to the interviewer terminal.
In an exemplary embodiment, the weighted evaluation module 230 is further configured to assign weight values to the resume verification result, the quality model information, and the professional grade information and calculate a final evaluation score, wherein the resume verification weight is greater than the quality model information weight, and the quality model information weight is greater than the professional grade information weight;
and to judge whether the final evaluation score is greater than a preset threshold value, and if so, determine that the candidate can be hired.
EXAMPLE III
Fig. 8 is a schematic diagram of a hardware architecture of a computer device according to a third embodiment of the present invention. In the present embodiment, the computer device 2 is a device capable of automatically performing numerical calculation and/or information processing in accordance with a preset or stored instruction. The computer device 2 may be a personal computer, a tablet computer, a mobile phone, or the like, or may be a cloud device for providing a virtual client, such as a rack server, a blade server, a tower server, or a rack server (including an independent server or a server cluster composed of a plurality of servers). As shown, the computer device 2 includes, but is not limited to, at least a memory 21, a processor 22, a network interface 23, and an intelligent interview-based system 20 communicatively coupled to each other via a system bus. Wherein:
in this embodiment, the memory 21 includes at least one type of computer-readable storage medium including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a Programmable Read Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the storage 21 may be an internal storage unit of the computer device 2, such as a hard disk or a memory of the computer device 2. In other embodiments, the memory 21 may also be an external storage device of the computer device 2, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like provided on the computer device 2. Of course, the memory 21 may also comprise both internal and external memory units of the computer device 2. In this embodiment, the memory 21 is generally used for storing an operating system installed in the computer device 2 and various application software, such as a program code of the intelligent interview method in the first embodiment. Further, the memory 21 may also be used to temporarily store various types of data that have been output or are to be output.
Processor 22 may be a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data processing chip in some embodiments. The processor 22 is typically used to control the overall operation of the computer device 2. In this embodiment, the processor 22 is configured to run the program code stored in the memory 21 or to process data, for example to run the intelligent interview system 20, so as to implement the intelligent interview method of the first embodiment.
The network interface 23 may comprise a wireless network interface or a wired network interface, and the network interface 23 is generally used for establishing communication connection between the computer device 2 and other electronic apparatuses. For example, the network interface 23 is used to connect the computer device 2 to an external terminal through a network, establish a data transmission channel and a communication connection between the computer device 2 and the external terminal, and the like. The network may be a wireless or wired network such as an Intranet (Intranet), the Internet (Internet), a Global System of Mobile communication (GSM), Wideband Code Division Multiple Access (WCDMA), a 4G network, a 5G network, Bluetooth (Bluetooth), Wi-Fi, and the like.
It is noted that fig. 8 only shows the computer device 2 with components 20-23, but it is to be understood that not all shown components are required to be implemented, and that more or less components may be implemented instead.
In this embodiment, the intelligent interview system 20 stored in the memory 21 can be further divided into one or more program modules, and the one or more program modules are stored in the memory 21 and executed by one or more processors (in this embodiment, the processor 22) to complete the present invention.
For example, FIG. 7 shows a schematic diagram of the program modules of the intelligent interview system 20, in which the intelligent interview system 20 can be divided into a resume verification module 200, a quality model screening module 210, an answer rating module 220, and a weighted evaluation module 230. The program modules referred to in the present invention are a series of computer program instruction segments capable of performing specific functions, and they are better suited than a program to describing the execution process of the intelligent interview system 20 in the computer device 2. The specific functions of the program modules 200-230 have been described in detail in the second embodiment and are not repeated here.
Example four
The present embodiment also provides a computer-readable storage medium, such as a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, a server, an app store, etc., on which a computer program is stored which, when executed by a processor, implements the corresponding functions. The computer-readable storage medium of this embodiment is used for storing the intelligent interview system 20, and when the stored program is executed by the processor, the intelligent interview method of the first embodiment is implemented.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. An intelligent interview method, comprising:
acquiring candidate resume information uploaded by an interviewer terminal, acquiring information on a preset website through an application programming interface of the preset website to verify the authenticity of the resume information and recording a resume information verification result;
receiving voice data of the candidate returned by the interview site terminal, recognizing text information corresponding to the voice data, performing emotion labeling on the text information, and analyzing the labeled text information to obtain a quality model of the candidate;
receiving test question answers uploaded by the candidate terminal, checking the correctness of the test question answers, obtaining answer scores, and recording professional grades corresponding to the scores;
and performing weighted operation according to the resume information verification result, the quality model and the professional grade to obtain a final evaluation result of the candidate, and sending the evaluation result to the interviewer terminal.
2. The intelligent interview method according to claim 1, wherein the steps of obtaining candidate resume information uploaded by the interviewer terminal, obtaining information on a preset website through an application programming interface of the preset website to verify the authenticity of the resume information, and recording the resume information verification result include:
acquiring candidate resume information uploaded by the interviewer terminal;
calling real historical data corresponding to the candidate in a preset website database through an application programming interface of a preset website;
and extracting specified items in the candidate resume information, verifying the correctness of the specified items by taking the real resume data as a reference, and storing the verification result of each item.
3. The intelligent interview method according to claim 1, wherein the step of recognizing text information corresponding to the voice data and performing emotion labeling on the text information comprises:
recognizing the voice data and generating corresponding text information;
performing word segmentation processing on the generated text information, and calculating the emotion score of each word segmentation;
and taking a single sentence as a unit, statistically combining the emotion scores of the word segments in the sentence to obtain an emotion score for each sentence, and assigning each sentence a corresponding label according to its emotion score.
4. The intelligent interview method according to claim 1, wherein the step of recognizing text information corresponding to the voice data and performing emotion labeling on the text information further comprises:
recognizing the voice data and generating corresponding text information;
identifying an emotional tendency field in the text information, and performing frequency calculation on the emotional tendency field;
and giving weights to the emotional tendency fields and the frequency values thereof by referring to a preset expectation analysis library, calculating to generate a final emotional value, and giving corresponding emotional labels to the text information according to the final emotional value.
5. The intelligent interview method of claim 3, wherein the step of performing emotion labeling on the text information comprises:
and searching a relation table between the prestored emotion scores and the emotion labels to obtain the emotion labels corresponding to the sentences, and adding storage address pointers of the corresponding emotion labels to the head or tail of the data of each sentence for storage.
6. The intelligent interview method of claim 1, wherein the step of analyzing the labeled text information to obtain a quality model of the candidate comprises:
searching for a quality model set in a preset quality model library according to the interview question corresponding to the text information;
and calculating the degree of matching between the labeled text information and each model in the set, and selecting the model with the highest matching degree as the quality model of the candidate.
7. The intelligent interviewing method according to claim 1, wherein the step of performing a weighted operation according to the resume information verification result, the quality model and the professional grade to obtain a final evaluation result of the candidate and sending the evaluation result to the interviewer terminal comprises:
giving weight values to the resume verification result, the quality model information and the professional grade information and calculating a final evaluation score, wherein the resume verification weight is greater than the quality model information weight, and the quality model information weight is greater than the professional grade information weight;
and judging whether the final evaluation score is greater than a preset threshold value, and if so, determining that the candidate can be hired.
8. An intelligent interview system, comprising:
the resume verification module is used for acquiring candidate resume information uploaded by the interviewer terminal, acquiring information on a preset website through an application programming interface of the preset website to verify the authenticity of the resume information and recording a resume information verification result;
the quality model screening module is used for receiving the voice data of the candidate returned by the interview site terminal, recognizing text information corresponding to the voice data, performing emotion labeling on the text information, and analyzing the labeled text information to obtain a quality model of the candidate;
the answer rating module is used for receiving the test question answers uploaded by the candidate terminal, checking the correctness of the test question answers, obtaining answer scores and recording professional grades corresponding to the scores;
and the weighted evaluation module is used for carrying out weighted operation according to the resume information verification result, the quality model and the professional grade to obtain a final evaluation result of the candidate and sending the evaluation result to the interviewer terminal.
9. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, carries out the steps of the intelligent interview method according to any one of claims 1 to 7.
10. A computer-readable storage medium having stored therein a computer program executable by at least one processor, so as to cause the at least one processor to perform the steps of the intelligent interview method according to any one of claims 1 to 7.
CN201910968962.9A 2019-10-12 2019-10-12 Intelligent interviewing method, system, equipment and computer storage medium Pending CN111222837A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910968962.9A CN111222837A (en) 2019-10-12 2019-10-12 Intelligent interviewing method, system, equipment and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910968962.9A CN111222837A (en) 2019-10-12 2019-10-12 Intelligent interviewing method, system, equipment and computer storage medium

Publications (1)

Publication Number Publication Date
CN111222837A true CN111222837A (en) 2020-06-02

Family

ID=70828954

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910968962.9A Pending CN111222837A (en) 2019-10-12 2019-10-12 Intelligent interviewing method, system, equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN111222837A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111833010A (en) * 2020-06-12 2020-10-27 北京网聘咨询有限公司 Intelligent interviewing method, system, equipment and storage medium
CN112786054A (en) * 2021-02-25 2021-05-11 深圳壹账通智能科技有限公司 Intelligent interview evaluation method, device and equipment based on voice and storage medium
CN114418366A (en) * 2022-01-06 2022-04-29 北京博瑞彤芸科技股份有限公司 Data processing method and device for intelligent cloud interview
CN117114475A (en) * 2023-08-21 2023-11-24 广州红海云计算股份有限公司 Comprehensive capability assessment system based on multidimensional talent assessment strategy

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107909339A (en) * 2017-11-01 2018-04-13 平安科技(深圳)有限公司 Job candidates verify grading approach, application server and computer-readable recording medium
CN109325124A (en) * 2018-09-30 2019-02-12 武汉斗鱼网络科技有限公司 A kind of sensibility classification method, device, server and storage medium
CN109960725A (en) * 2019-01-17 2019-07-02 平安科技(深圳)有限公司 Text classification processing method, device and computer equipment based on emotion
CN110162599A (en) * 2019-04-15 2019-08-23 深圳壹账通智能科技有限公司 Personnel recruitment and interview method, apparatus and computer readable storage medium
CN110211591A (en) * 2019-06-24 2019-09-06 卓尔智联(武汉)研究院有限公司 Interview data analysing method, computer installation and medium based on emotional semantic classification

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107909339A (en) * 2017-11-01 2018-04-13 平安科技(深圳)有限公司 Job candidates verify grading approach, application server and computer-readable recording medium
CN109325124A (en) * 2018-09-30 2019-02-12 武汉斗鱼网络科技有限公司 A kind of sensibility classification method, device, server and storage medium
CN109960725A (en) * 2019-01-17 2019-07-02 平安科技(深圳)有限公司 Text classification processing method, device and computer equipment based on emotion
CN110162599A (en) * 2019-04-15 2019-08-23 深圳壹账通智能科技有限公司 Personnel recruitment and interview method, apparatus and computer readable storage medium
CN110211591A (en) * 2019-06-24 2019-09-06 卓尔智联(武汉)研究院有限公司 Interview data analysing method, computer installation and medium based on emotional semantic classification

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
昵称败给了备注: "What does it mean to weight according to a certain quantity?", page 1, Retrieved from the Internet <URL:https://www.zhihu.com/question/24656722> *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111833010A (en) * 2020-06-12 2020-10-27 北京网聘咨询有限公司 Intelligent interviewing method, system, equipment and storage medium
CN112786054A (en) * 2021-02-25 2021-05-11 深圳壹账通智能科技有限公司 Intelligent interview evaluation method, device and equipment based on voice and storage medium
CN114418366A (en) * 2022-01-06 2022-04-29 北京博瑞彤芸科技股份有限公司 Data processing method and device for intelligent cloud interview
CN114418366B (en) * 2022-01-06 2022-08-26 北京博瑞彤芸科技股份有限公司 Data processing method and device for intelligent cloud interview
CN117114475A (en) * 2023-08-21 2023-11-24 广州红海云计算股份有限公司 Comprehensive capability assessment system based on multidimensional talent assessment strategy

Similar Documents

Publication Publication Date Title
CN112346567B (en) Virtual interaction model generation method and device based on AI (Artificial Intelligence) and computer equipment
CN109767787B (en) Emotion recognition method, device and readable storage medium
CN111222837A (en) Intelligent interviewing method, system, equipment and computer storage medium
CN107256428B (en) Data processing method, data processing device, storage equipment and network equipment
CN110597952A (en) Information processing method, server, and computer storage medium
CN110874716A (en) Interview evaluation method and device, electronic equipment and storage medium
CN109360550A (en) Test method, device, equipment and the storage medium of voice interactive system
WO2021218028A1 (en) Artificial intelligence-based interview content refining method, apparatus and device, and medium
WO2021056837A1 (en) Customization platform and method for service quality evaluation product
CN108268450B (en) Method and apparatus for generating information
KR102476099B1 (en) METHOD AND APPARATUS FOR GENERATING READING DOCUMENT Of MINUTES
CN112235470B (en) Incoming call client follow-up method, device and equipment based on voice recognition
CN111190946A (en) Report generation method and device, computer equipment and storage medium
CN113807103A (en) Recruitment method, device, equipment and storage medium based on artificial intelligence
CN107844531B (en) Answer output method and device and computer equipment
CN112395887A (en) Dialogue response method, dialogue response device, computer equipment and storage medium
CN115641101A (en) Intelligent recruitment method, device and computer readable medium
CN109408175B (en) Real-time interaction method and system in general high-performance deep learning calculation engine
KR102280490B1 (en) Training data construction method for automatically generating training data for artificial intelligence model for counseling intention classification
KR101440887B1 (en) Method and apparatus of recognizing business card using image and voice information
CN113609833B (en) Dynamic file generation method and device, computer equipment and storage medium
CN114528851B (en) Reply sentence determination method, reply sentence determination device, electronic equipment and storage medium
CN116127011A (en) Intention recognition method, device, electronic equipment and storage medium
CN114254088A (en) Method for constructing automatic response model and automatic response method
CN114141235A (en) Voice corpus generation method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination