CN115345591A - Intelligent interviewing method, intelligent interviewing device and intelligent interviewing system - Google Patents

Intelligent interviewing method, intelligent interviewing device and intelligent interviewing system

Info

Publication number
CN115345591A
Authority
CN
China
Prior art keywords
question
interview
score
basic
topic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210999967.XA
Other languages
Chinese (zh)
Inventor
刘丹
黄豪洲
段勇
石行
李雪莲
韦运波
范彪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd filed Critical Alipay Hangzhou Information Technology Co Ltd
Priority to CN202210999967.XA priority Critical patent/CN115345591A/en
Publication of CN115345591A publication Critical patent/CN115345591A/en
Pending legal-status Critical Current

Classifications

    • G06Q 10/1053: Administration; Management; Office automation; Human resources; Employment or hiring
    • G06F 16/35: Information retrieval of unstructured textual data; Clustering; Classification
    • G06F 40/279, G06F 40/289: Natural language analysis; Recognition of textual entities; Phrasal analysis, e.g. finite state techniques or chunking
    • G06F 40/30: Handling natural language data; Semantic analysis
    • G06V 20/46: Scenes or scene-specific elements in video content; Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06V 40/172: Human faces; Classification, e.g. identification
    • G06V 40/174: Facial expression recognition
    • G10L 25/63: Speech or voice analysis specially adapted for estimating an emotional state

Abstract

Embodiments of the present specification provide an intelligent interview method, an intelligent interview device, and an intelligent interview system. During an interview, after the answer content of the interview object for the current interview question is obtained and the question score of the current interview question is determined, the next interview question is determined from the interview paper, a historical interview record, or an interview question library according to the question score and the question depth and question breadth of the current interview question, and the determined next interview question is provided to the interview object for answering. In response to determining that no next interview question exists, an interview score of the interview object is determined according to the question scores of the basic questions in the interview paper.

Description

Intelligent interviewing method, intelligent interviewing device and intelligent interviewing system
Technical Field
Embodiments of the present specification relate generally to the field of artificial intelligence, and in particular to an intelligent interview method, an intelligent interview device, and an intelligent interview system.
Background
With the growth of enterprise scale and the intensification of industry competition, enterprises' demand for labor and employee mobility have increased rapidly, making employee recruitment a key factor in enterprise development. Traditional employee recruitment relies on manual interviews. However, manual interviews suffer from long interview cycles, high interviewer costs, and difficulty in unifying interview standards, and therefore cannot meet large-scale recruitment needs.
Disclosure of Invention
Embodiments of the present specification provide an intelligent interview method, an intelligent interview device, and an intelligent interview system. With the intelligent interview method, device, and system, large-scale recruitment can be realized, and the true ability of an interview object can be accurately reflected.
According to an aspect of the embodiments of the present specification, there is provided an intelligent interview method, comprising: obtaining the answer content of an interview object for the current interview question; determining a question score of the current interview question based on the obtained answer content; determining a next interview question from an interview paper, a historical interview record, or an interview question library according to the question score and the question depth and question breadth of the current interview question, wherein the interview paper comprises at least two basic questions selected from the interview question library; and providing the determined next interview question to the interview object for answering.
Optionally, in an example of the above aspect, determining the next interview question from the interview paper, the historical interview record, or the interview question library according to the question score and the question depth or question breadth of the current interview question may include: in response to the question score being greater than the corresponding score threshold and the question depth of the current interview question having reached the question guide depth of the basic question of the question chain in which the current interview question is located, determining the next basic question in the interview paper as the next interview question; in response to the question score being greater than the corresponding score threshold and the question depth of the current interview question not having reached the question guide depth of the basic question of the question chain in which the current interview question is located, selecting an associated question of the current interview question from the historical interview record as the next interview question; in response to the question score being not greater than the corresponding score threshold and the question breadth of the current interview question having reached the question breadth of the basic question of the interview paper corresponding to the current interview question, determining the next basic question in the interview paper as the next interview question; and in response to the question score being not greater than the corresponding score threshold and the question breadth of the current interview question not having reached the question breadth of the basic question of the interview paper corresponding to the current interview question, selecting a basic question of the same type as the basic question to which the current interview question belongs from the interview question library as the next interview question.
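The four-way branch described above can be summarized as follows. The sketch below is illustrative only; all names (QuestionState, guide_depth, and the three callables) are assumptions introduced for illustration and are not terms taken from this disclosure.

```python
from dataclasses import dataclass

# Minimal sketch of the four-way branch for selecting the next interview
# question. Field and function names are illustrative assumptions.

@dataclass
class QuestionState:
    score_threshold: float   # per-question pass threshold
    depth: int               # current depth reached in the question chain
    guide_depth: int         # question guide depth of the chain's basic question
    breadth: int             # current breadth reached for the basic question
    max_breadth: int         # question breadth configured for the basic question

def select_next(state: QuestionState, score: float,
                next_basic, associated_from_history, same_type_from_bank):
    """next_basic, associated_from_history and same_type_from_bank are callables
    standing in for 'next basic question in the interview paper', 'associated
    question from the historical interview record' and 'same-type question
    from the interview question library'."""
    if score > state.score_threshold:
        if state.depth >= state.guide_depth:
            return next_basic()              # depth exhausted -> advance in the paper
        return associated_from_history()     # dig deeper along the question chain
    if state.breadth >= state.max_breadth:
        return next_basic()                  # breadth exhausted -> advance in the paper
    return same_type_from_bank()             # broaden within the same question type
```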
Optionally, in an example of the above aspect, each basic question includes a heuristic guide tag for question depth expansion association and has a question chain throughput threshold. Selecting an associated question of the current interview question from the historical interview record as the next interview question, in response to the question score being greater than the corresponding score threshold and the question depth of the current interview question not having reached the question guide depth of the basic question of the question chain in which the current interview question is located, may include: in response to the number of interview questions in that question chain whose question scores are greater than the corresponding score threshold being smaller than the question chain throughput threshold, selecting an associated question pointed to by the heuristic guide tag from the historical interview record as the next interview question; and in response to that number being not smaller than the question chain throughput threshold, recommending a sub-question of the question chain of the current interview question from the historical interview record as the next interview question.
Optionally, in an example of the above aspect, recommending a sub-question of the question chain of the current interview question from the historical interview record as the next interview question, in response to the number of interview questions in the question chain whose question scores are greater than the corresponding score threshold being not smaller than the question chain throughput threshold, may include: recommending the sub-question from the historical interview record in a user collaborative filtering manner.
Optionally, in an example of the above aspect, each question includes a type extension tag for question breadth expansion association. Selecting a basic question of the same type from the interview question library as the next interview question, in response to the question score being not greater than the corresponding score threshold and the question breadth of the current interview question not having reached the question breadth of the basic question of the interview paper corresponding to the current interview question, may include: selecting a same-type basic question pointed to by the type extension tag from the interview question library as the next interview question.
Optionally, in an example of the above aspect, each question has positive keywords and negative keywords. Determining the question score of the current interview question based on the obtained answer content may include: determining an answer matching degree score between the obtained answer content and the reference answer; determining a positive keyword coverage score and a negative keyword coverage score of the obtained answer content; and determining the question score of the current interview question according to the determined answer matching degree score, positive keyword coverage score, and negative keyword coverage score.
Optionally, in an example of the above aspect, the intelligent interview method is applied to customer service interviews, and the answer content of the interview object for the current interview question includes audio/video content captured via an audio/video capture device. The intelligent interview method may further include: determining a speech emotion score of the answer content by performing speech emotion analysis on the audio content; and determining a facial expression score of the answer content by performing facial expression analysis on the video content. Correspondingly, determining the question score of the current interview question according to the determined answer matching degree score, positive keyword coverage score, and negative keyword coverage score may include: determining the question score of the current interview question according to the determined answer matching degree score, positive keyword coverage score, negative keyword coverage score, speech emotion score, and facial expression score.
Optionally, in an example of the above aspect, the intelligent interview method may further include: in response to determining that no next interview question exists, determining an interview score of the interview object according to the question scores of the basic questions in the interview paper.
Optionally, in an example of the above aspect, the interview paper is arranged into at least two structural modules, each structural module having a scoring weight. Determining the interview score of the interview object according to the question scores of the basic questions in the interview paper may include: determining the interview score of the interview object according to the question score of each basic question in the interview paper and the scoring weight of the structural module in which that basic question is located.
Optionally, in an example of the above aspect, the structural modules include a basic expression module, a personality search module, a job recognition module, and a competency search module.
Optionally, in an example of the above aspect, for a basic question having question depth expansion and/or question breadth expansion, the question score of the basic question may be determined by: determining the answer score of each question chain of the basic question, wherein at least one sub-basic question is expanded through question breadth expansion of the basic question, and each question chain is obtained by performing question depth expansion on the basic question and each sub-basic question; and determining the question score of the basic question according to the answer scores of the question chains.
According to another aspect of the embodiments of the present specification, there is provided an intelligent interview apparatus, comprising: an answer content acquisition unit configured to acquire the answer content of an interview object for the current interview question; a question score determination unit configured to determine a question score of the current interview question based on the acquired answer content; an interview question determination unit configured to determine a next interview question from an interview paper, a historical interview record, or an interview question library according to the question score and the question depth and question breadth of the current interview question, wherein the interview paper comprises at least two basic questions selected from the interview question library; and an interview question providing unit configured to provide the determined next interview question to the interview object for answering.
Optionally, in an example of the above aspect, in response to the question score being greater than the corresponding score threshold and the question depth of the current interview question having reached the question guide depth of the basic question of the question chain in which the current interview question is located, the interview question determination unit determines the next basic question in the interview paper as the next interview question. In response to the question score being greater than the corresponding score threshold and the question depth of the current interview question not having reached that question guide depth, the interview question determination unit selects an associated question of the current interview question from the historical interview record as the next interview question. In response to the question score being not greater than the corresponding score threshold and the question breadth of the current interview question having reached the question breadth of the basic question of the interview paper corresponding to the current interview question, the interview question determination unit determines the next basic question in the interview paper as the next interview question. In response to the question score being not greater than the corresponding score threshold and the question breadth of the current interview question not having reached that question breadth, the interview question determination unit selects a basic question of the same type as the basic question to which the current interview question belongs from the interview question library as the next interview question.
Optionally, in an example of the above aspect, each basic question includes a heuristic guide tag for question depth expansion association and has a question chain throughput threshold. In response to the number of interview questions in the question chain of the current interview question whose question scores are greater than the corresponding score threshold being smaller than the question chain throughput threshold, the interview question determination unit selects an associated question pointed to by the heuristic guide tag from the historical interview record as the next interview question. In response to that number being not smaller than the question chain throughput threshold, the interview question determination unit recommends a sub-question of the question chain of the current interview question from the historical interview record as the next interview question.
Optionally, in an example of the above aspect, in response to the number of interview questions in the question chain of the current interview question whose question scores are greater than the corresponding score threshold being not smaller than the question chain throughput threshold, the interview question determination unit recommends a sub-question of the question chain of the current interview question from the historical interview record as the next interview question in a user collaborative filtering manner.
Optionally, in an example of the above aspect, each question includes a type extension tag for question breadth expansion association. In response to the question score being not greater than the corresponding score threshold and the question breadth of the current interview question not having reached the question breadth of the basic question of the interview paper corresponding to the current interview question, the interview question determination unit selects a same-type basic question pointed to by the type extension tag from the interview question library as the next interview question.
Optionally, in an example of the above aspect, each question has positive keywords and negative keywords. The question score determination unit includes: an answer matching degree scoring module configured to determine the answer matching degree score between the acquired answer content and the reference answer; a keyword coverage scoring module configured to determine the positive keyword coverage score and the negative keyword coverage score of the acquired answer content; and a question score determination module configured to determine the question score of the current interview question according to the determined answer matching degree score, positive keyword coverage score, and negative keyword coverage score.
Optionally, in an example of the above aspect, the intelligent interview apparatus is applied to customer service interviews, and the answer content acquisition unit acquires the answer content of the interview object for the current interview question by capturing audio/video content of the interview object. The question score determination unit may further include: a speech emotion scoring module configured to determine the speech emotion score of the answer content by performing speech emotion analysis on the audio content; and a facial expression scoring module configured to determine the facial expression score of the answer content by performing facial expression analysis on the video content. The question score determination module then determines the question score of the current interview question according to the determined answer matching degree score, positive keyword coverage score, negative keyword coverage score, speech emotion score, and facial expression score.
Optionally, in an example of the above aspect, the intelligent interview apparatus may further include: an interview score determination unit configured to determine, in response to determining that no next interview question exists, the interview score of the interview object according to the question scores of the basic questions in the interview paper.
Optionally, in an example of the above aspect, the interview paper is arranged into at least two structural modules, each structural module having a scoring weight. The interview score determination unit determines the interview score of the interview object according to the question score of each basic question in the interview paper and the scoring weight of the structural module in which that basic question is located.
Optionally, in an example of the above aspect, for a basic question having question depth expansion and/or question breadth expansion, the interview score determination unit determines the question score of the basic question by: determining the answer score of each question chain of the basic question, wherein at least one sub-basic question is expanded through question breadth expansion of the basic question, and each question chain is obtained by performing question depth expansion on the basic question and each sub-basic question; and determining the question score of the basic question according to the answer scores of the question chains.
According to another aspect of the embodiments of the present specification, there is provided an intelligent interview system, comprising: an interview client device; an interview server device comprising the above intelligent interview apparatus; and a data storage device configured to store the interview question library.
According to another aspect of embodiments herein, there is provided an intelligent interview apparatus comprising: at least one processor, a memory coupled to the at least one processor, and a computer program stored in the memory, the at least one processor executing the computer program to implement the intelligent interview method as described above.
According to another aspect of the embodiments of the present specification, there is provided a computer-readable storage medium storing executable instructions that, when executed, cause a processor to perform the intelligent interview method described above.
According to another aspect of embodiments of the present specification, there is provided a computer program product comprising a computer program for execution by a processor to implement the intelligent interview method as described above.
Drawings
A further understanding of the nature and advantages of the present disclosure may be realized by reference to the following drawings. In the drawings, similar components or features may have the same reference numerals.
Fig. 1 illustrates an example architectural diagram of an intelligent interview system according to embodiments of the present description.
FIG. 2 illustrates an example flow diagram of a method of intelligent interviewing according to embodiments of the present description.
FIG. 3 illustrates an example structural diagram of an interview question according to an embodiment of the present specification.
FIG. 4 illustrates an example flow diagram of an algorithm-based automatic interview question mining and production process according to an embodiment of the present specification.
FIG. 5 illustrates an example schematic of a structured layout of an interview paper according to an embodiment of the present specification.
Fig. 6 illustrates an example flow diagram of an interview flow advancing method according to an embodiment of the present specification.
FIG. 7 illustrates an example flow diagram of a question score determination process according to an embodiment of the present specification.
FIG. 8 illustrates an example flow diagram of a question score determination process according to an embodiment of the present specification.
FIG. 9 illustrates an example flow diagram of an interview question determination process according to an embodiment of the present specification.
FIG. 10 illustrates an example schematic of the question chains of a basic question in an interview paper according to an embodiment of the present specification.
Fig. 11 illustrates a block diagram of an intelligent interview apparatus according to an embodiment of the present specification.
Fig. 12 is a block diagram illustrating an implementation example of a question score determination unit according to an embodiment of the present specification.
FIG. 13 shows a schematic diagram of a computer-based implementation of an intelligent interview apparatus, according to an embodiment of the present description.
Detailed Description
The subject matter described herein will now be discussed with reference to example embodiments. It should be understood that these embodiments are discussed only to enable those skilled in the art to better understand and thereby implement the subject matter described herein, and are not intended to limit the scope, applicability, or examples set forth in the claims. Changes may be made in the function and arrangement of elements discussed without departing from the scope of the disclosure. Various examples may omit, substitute, or add various procedures or components as needed. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with respect to some examples may also be combined in other examples.
As used herein, the term "include" and its variants are open-ended terms in the sense of "including, but not limited to". The term "based on" means "based at least in part on". The terms "one embodiment" and "an embodiment" mean "at least one embodiment". The term "another embodiment" means "at least one other embodiment". The terms "first," "second," and the like may refer to different or the same object. Other definitions, whether explicit or implicit, may be included below. Unless the context clearly dictates otherwise, the definition of a term is consistent throughout the specification.
With the growth of enterprise scale and the intensification of industry competition, enterprises' demand for labor and employee mobility have increased rapidly, making employee recruitment a key factor in enterprise development. Traditional employee recruitment relies on manual interviews. However, manual interviews suffer from long interview cycles, high interviewer costs, and difficulty in unifying interview standards, and therefore cannot meet large-scale recruitment needs.
To address this, an intelligent interview scheme designed for general application scenarios has been proposed. In that intelligent interview scheme, the interview is conducted by means of a pre-arranged interview paper, and video is captured during the interview while the interview flow is advanced sequentially according to the content of the paper. In that scheme, once the paper is determined, the content of the interview questions is fixed, which limits the scope in which the interview object can be examined and makes it difficult to comprehensively assess the skill level and job competence of the interview object.
In view of the foregoing, embodiments of the present specification provide an intelligent interview scheme. In this intelligent interview scheme, an interview paper composed of basic questions is arranged in advance. During the interview, for each basic question, when it is determined based on the answer content for that basic question that question depth expansion or question breadth expansion is needed, a related interview question is selected from a historical interview record or an interview question library and asked of the interview object, instead of directly asking the next basic question in the interview paper. The content of the interview questions is therefore more flexible, the interview object can be comprehensively examined in both question depth and question breadth around the basic questions in the interview paper, and the interview result can better reflect the skill level and job competence of the interview object.
An intelligent interview method, an intelligent interview apparatus, and an intelligent interview system according to embodiments of the present specification will be described below with reference to the accompanying drawings.
FIG. 1 illustrates an example architectural diagram of an intelligent interview system 100 according to embodiments of the present specification.
In fig. 1, network 110 is employed to interconnect interview client device 120 and interview server device 130.
Network 110 may be any type of network capable of interconnecting network entities. The network 110 may be a single network or a combination of various networks. In terms of coverage, the network 110 may be a Local Area Network (LAN), a Wide Area Network (WAN), or the like. In terms of a carrier medium, the network 110 may be a wired network, a wireless network, or the like. In terms of data switching technology, the network 110 may be a circuit switched network, a packet switched network, or the like.
Interview client device 120 can be any type of electronic computing device capable of connecting to network 110, accessing a server or website on network 110, processing data or signals, and the like. For example, the interview client device 120 may be a desktop computer, a laptop computer, a tablet computer, a smart phone, and the like. Although only one interview client device is shown in fig. 1, it should be understood that a different number of interview client devices may be connected to network 110.
In one implementation, the interview client device 120 may be used by an interview object. An interview client 122 may be installed on the interview client device 120. In some cases, the interview client 122 may interact with the interview server device 130. For example, the interview client 122 can transmit the answer content of the interview object to the interview server device 130 via the network 110, and receive and display interview questions for the interview object from the interview server device 130. The interview server device 130 can be connected to a data storage device that stores an interview question library 140, or the interview server device 130 can itself store the interview question library 140.
However, it should be understood that in some embodiments, the interview system 100 may not include the interview client device 120 and the network 110. In this case, the interview object may complete the interview process using an interview application installed on the interview server device 130.
It should be understood that all of the network entities shown in fig. 1 are exemplary, and that network 110 may refer to any other network entity, depending on the particular application needs. In some embodiments, the network 110 may be any one or more of a wired network or a wireless network. Examples of network 110 may include, but are not limited to, a cable network, a fiber optic network, a telecommunications network, an intranet, the internet, a Local Area Network (LAN), a Wide Area Network (WAN), a Wireless Local Area Network (WLAN), a Metropolitan Area Network (MAN), a Public Switched Telephone Network (PSTN), a bluetooth network, a zigbee network (zigbee), near Field Communication (NFC), an intra-device bus, an intra-device line, and the like, or any combination thereof.
FIG. 2 illustrates an example flow diagram of a method 200 of intelligent interviewing according to embodiments of the present description. It is noted that the intelligent interview process illustrated in fig. 2 is performed by the interview server device 130.
As shown in fig. 2, at 210, when an interview is to be conducted with an interview object, an interview paper is generated. The interview paper consists of at least two basic questions selected from an interview question library. In some embodiments, the interview paper can be generated by selecting a suitable set of interview questions from the interview question library as the basic questions of the interview paper based on the specific interview requirements of the interview scenario.
The interview questions in the interview question library can be generated in advance. In some embodiments, interview questions can be written manually, for example by an interviewer or instructor with interview experience. Manually written interview questions can include, for example but not limited to, position-general questions. In addition, interview questions can also be produced by algorithm-based automatic mining. Interview questions automatically mined based on an algorithm may include, for example but not limited to, interview questions oriented to long-tail, frequently changing business.
In some embodiments, a generated interview question can include a question and a reference answer. Here, the question can also be referred to as the stem portion of the interview question. Further, optionally, a generated interview question can also include question attributes, such as positive keywords, negative keywords, a heuristic guide tag, and a type extension tag. The term "positive keywords" refers to keywords that contribute positively to the score of an interview question. For example, for the question "Facing the competitive customer service industry, what do you feel is your biggest strength that makes you stand out from many interviewees?", the positive keywords may include, for example, "optimistic and upbeat", "proactive", "patient", "considerate", "approachable", "communication ability", and the like. The term "negative keywords" refers to keywords that contribute negatively to the score of an interview question. For example, for the question "It is already 30 minutes past the end of your shift and you have offered the client several solutions, but the client still does not accept them. What do you do next?", the negative keywords may include "hang up the call", "hang up", "terminate", "interrupt", "cut in", and the like. A heuristic guide tag is a tag used for question depth expansion association, and a type extension tag is a tag used for question breadth expansion association. In the present specification, the heuristic guide tag points to a question that has a progressive relationship in question depth with the question currently in use; the heuristic guide tag thus makes it possible to ask the interview object further questions along the question depth direction, deepening the depth of examination and probing the depth of the interview object's ability in a single direction. The type extension tag points to questions of the same or a similar question type as the question currently in use, so that the interview object can be asked further questions along the question breadth direction, broadening the examination and probing the coverage of the interview object's abilities. In this specification, the term "question type" refers to a classification of interview questions. Question types may be customized. For example, for customer service interviews, the question types may include service attitude, stress testing, professional skills, and the like.
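As an illustration of the question attributes described above, an interview question could be represented roughly as follows. This is a minimal sketch; all field names are assumptions introduced for illustration, not a data model specified in this disclosure.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative representation of an interview question and its attributes
# (stem, reference answer, positive/negative keywords, heuristic guide tag,
# type extension tag). Field names are assumptions.

@dataclass
class InterviewQuestion:
    stem: str                                  # the question (stem portion)
    reference_answer: str
    question_type: str                         # e.g. "service attitude", "stress test"
    positive_keywords: List[str] = field(default_factory=list)
    negative_keywords: List[str] = field(default_factory=list)
    heuristic_guide_tag: Optional[str] = None  # points to a deeper follow-up question
    type_extension_tag: Optional[str] = None   # points to a same-type sibling question
```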
FIG. 3 illustrates an example structural diagram of an interview question according to an embodiment of the present specification.
In the interview question example of FIG. 3, each interview question can include a question and a reference answer. In addition, each interview question can also include question attributes such as positive keywords, negative keywords, a heuristic guide tag, and a type extension tag. It is noted that, in some embodiments, the question attributes of an interview question may include one or more of the question attributes described above.
It is noted that using heuristic guide tags and type extension tags for question depth expansion and question breadth expansion is merely an illustrative embodiment. In other embodiments, question depth expansion and question breadth expansion may be performed in other suitable manners.
FIG. 4 illustrates an example flow diagram of an algorithm-based automatic interview question mining and production process 400 according to an embodiment of the present specification. The interview question production process shown in FIG. 4 automatically mines interview questions based on historical interview dialogue content.
As shown in fig. 4, at 410, dialogue content recognition is performed on the historical interview dialogue content to identify key dialogue content therein. Historical interview dialogue content refers to complete interview dialogue data generated between the interview server device (or an interviewer) and an interview object, i.e., it consists of the successive dialogue utterances exchanged between the interview server device (or interviewer) and the user. For example, when a historical interview dialogue is in text form, text recognition can be performed on the historical interview dialogue content to identify its key dialogue content. When a historical interview dialogue is in speech form, speech recognition can first be performed to convert the historical interview dialogue content into text, and text recognition can then be performed on the converted content to identify its key dialogue content.
At 420, questions (i.e., stem portions) and reference answers are extracted and generated from the identified key dialogue content. When extracting questions and reference answers, the corpus in the historical interview dialogue content can be divided by dialogue turn, with each turn of dialogue content treated as one group. Then, for each group of corpus, questions are extracted from the dialogue content of the interview server device (or interviewer), and reference answers are extracted from the dialogue content of the interview object.
In some embodiments, the identified key dialogue content may be clustered and mined using a three-stage clustering algorithm based on KMeans and HDBSCAN, so as to extract questions and corresponding reference answers. For example, first, the KMeans algorithm is used to perform a first corpus clustering on the divided corpus to obtain first corpus clusters. The corpus purity within the first corpus clusters may not be high. Then, the HDBSCAN algorithm is used to perform a second corpus clustering within each first corpus cluster to obtain second corpus clusters (sub-clusters of the first corpus clusters). After the second clustering, a number of different second corpus clusters of higher purity are obtained, but highly similar clusters may still exist among them. Similar second corpus clusters are then merged to obtain intent corpus clusters and answer corpus clusters. For example, the final intent corpus clusters and answer corpus clusters may be obtained by computing the similarity of the corpora at the center of each second corpus cluster (e.g., computing vector distance or semantic similarity) and merging similar clusters according to the corpus similarity. The resulting intent corpus is used as the question, and the corresponding answer corpus is used as the reference answer. A sketch of this pipeline is given below.
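The following is a minimal sketch of such a three-stage pipeline, assuming the corpus has already been embedded into vectors and using scikit-learn's KMeans and the third-party hdbscan package as stand-ins for whatever implementations are actually used. The parameter values are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity
import hdbscan  # third-party 'hdbscan' package; assumed available

# Three-stage clustering sketch: coarse KMeans, per-cluster HDBSCAN refinement,
# then merging of second-stage clusters whose centers are highly similar.
# `vectors` is assumed to be an (n, d) array of corpus embeddings.

def three_stage_cluster(vectors, n_coarse=20, merge_threshold=0.9):
    coarse = KMeans(n_clusters=n_coarse, n_init=10, random_state=0).fit_predict(vectors)
    fine_centers, fine_members = [], []
    for c in range(n_coarse):
        idx = np.where(coarse == c)[0]
        if len(idx) < 5:
            continue
        labels = hdbscan.HDBSCAN(min_cluster_size=5).fit_predict(vectors[idx])
        for sub in set(labels) - {-1}:          # -1 is HDBSCAN noise
            members = idx[labels == sub]
            fine_members.append(members)
            fine_centers.append(vectors[members].mean(axis=0))
    # merge second-stage clusters whose centers are nearly identical
    sim = cosine_similarity(np.vstack(fine_centers))
    merged, used = [], set()
    for i in range(len(fine_members)):
        if i in used:
            continue
        group = list(fine_members[i])
        for j in range(i + 1, len(fine_members)):
            if j not in used and sim[i, j] >= merge_threshold:
                group.extend(fine_members[j])
                used.add(j)
        used.add(i)
        merged.append(np.array(group))
    return merged  # list of index arrays, one per final corpus cluster
```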
At 430, positive keywords and negative keywords for each question are extracted from the historical dialogue corpus of that question. For example, for each question, the historical dialogue corpus of the question can be word-segmented, and the positive and negative keywords of the question can be extracted from the segmentation result based on a Bayesian classifier, as sketched below.
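One possible reading of this step is sketched below: segment the answers, train a Naive Bayes classifier on answers labelled as good or poor, and read off the most class-indicative terms as candidate keywords. The labelling scheme, the use of jieba and scikit-learn, and the parameter values are all assumptions of this sketch.

```python
import jieba  # Chinese word segmentation; assumed available
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

def extract_keywords(answers, labels, top_k=10):
    """answers: raw answer strings for one question; labels: 1 = good answer, 0 = poor.
    Returns (candidate positive keywords, candidate negative keywords)."""
    segmented = [" ".join(jieba.cut(a)) for a in answers]
    vec = CountVectorizer()
    X = vec.fit_transform(segmented)
    clf = MultinomialNB().fit(X, labels)
    vocab = np.array(vec.get_feature_names_out())
    # log P(term | good) - log P(term | poor): high values suggest positive keywords,
    # low values suggest negative keywords
    contrast = clf.feature_log_prob_[1] - clf.feature_log_prob_[0]
    order = np.argsort(contrast)
    positives = vocab[order[-top_k:]][::-1].tolist()
    negatives = vocab[order[:top_k]].tolist()
    return positives, negatives
```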
At 440, the generated questions are tagged with heuristic guide tags and type extension tags via a tag generation algorithm.
At 450, the mined interview questions are stored in the interview question library. In one example, a manual spot check may be performed on the mined interview questions, and the mined interview questions are stored in the interview question library if the spot check passes. In another example, the mined interview questions are stored directly in the interview question library without a manual spot check.
In some embodiments, the interview paper can be further organized into at least two structural modules, wherein the basic questions in each structural module are used to examine the ability of the interview object in a specific direction. For example, in some embodiments, examples of structural modules may include, but are not limited to, a basic expression module, a personality search module, a job recognition module, and a competency search module. The basic questions in the basic expression module are used to examine the language expression ability of the interview object. The basic questions in the personality search module are used to examine the personality of the interview object. The basic questions in the job recognition module are used to examine the interview object's degree of recognition of the position being recruited for. The basic questions in the competency search module are used to examine the job competence of the interview object. In addition, each structural module can also have a different scoring weight, so as to reflect the different degrees of influence that the basic questions in each structural module have on the interview result. FIG. 5 illustrates an example schematic of a structured layout of an interview paper according to an embodiment of the present specification.
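A minimal sketch of combining question scores with module scoring weights follows. The module names and weight values are illustrative assumptions, and the simple weighted sum is only one possible reading of the scoring rule.

```python
# Illustrative module weights; these values are assumptions, not values
# specified in the disclosure.
MODULE_WEIGHTS = {
    "basic_expression": 0.2,
    "personality_search": 0.2,
    "job_recognition": 0.2,
    "competency_search": 0.4,
}

def interview_score(question_scores):
    """question_scores: list of (module_name, question_score) pairs, one per
    basic question; each score is weighted by its module's scoring weight."""
    return sum(MODULE_WEIGHTS[module] * score for module, score in question_scores)
```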
Returning to FIG. 2, after the interview paper is generated as described above, at 220, an interview flow is initiated starting with the first basic question in the interview paper, and the flow is advanced based on each round of answers by the interview object until all basic questions in the interview paper are completed.
Figure 6 illustrates an example flow diagram of an interview flow advancement method 600 according to an embodiment of the present description.
As shown in FIG. 6, at 610, the first basic question of the interview paper is provided to the interview object for answering. When the interview system consists of an interview client device and an interview server device, the interview server device provides the first basic question to the interview client device over the network, and the interview client device presents it to the interview object for answering. When the interview system consists only of an interview server device, the interview server device can present the first basic question to the interview object directly.
Operations 620 through 660 are then executed in a loop until the interview object has completed answering all interview questions.
Specifically, at 620, the answer content of the interview object for the current interview question is obtained. When the interview system consists of an interview client device and an interview server device, the interview client device captures the answer content of the interview object for the current interview question and provides it to the interview server device over the network. When the interview system consists only of an interview server device, the interview server device obtains the answer content of the interview object directly.
In some embodiments, the answer content of the interview object may be textual answer content; for example, the interview object may enter textual answer content via an input interface of an interview application installed on the interview client device or the interview server device. In some embodiments, for example in customer service interviews or other similar application scenarios, the interview object needs to answer questions by voice, so the answer content of the interview object is answer content in speech form. In this case, the interview client device or the interview server device can capture the answer content of the interview object via an audio/video capture device.
At 630, a question score of the current interview question is determined based on the obtained answer content.
In some embodiments, an answer matching degree score between the obtained answer content and the reference answer may be determined. For example, the answer content and the reference answer may each be converted into word vectors, and text matching may then be performed on the converted word vectors, yielding a text matching degree score (i.e., the answer matching degree score) as the question score of the current interview question.
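A minimal sketch of such an answer matching degree score is given below, using averaged word vectors and cosine similarity. The embed function stands in for whatever word-vector model is used and is an assumption of this sketch.

```python
import numpy as np

def answer_match_score(answer_tokens, reference_tokens, embed):
    """Average the word vectors of the answer and of the reference answer and
    return their cosine similarity. `embed` maps a token to a vector."""
    a = np.mean([embed(t) for t in answer_tokens], axis=0)
    r = np.mean([embed(t) for t in reference_tokens], axis=0)
    return float(np.dot(a, r) / (np.linalg.norm(a) * np.linalg.norm(r) + 1e-9))
```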
FIG. 7 illustrates an example flow diagram of a question score determination process according to an embodiment of the present specification. In the embodiment illustrated in FIG. 7, each interview question can have positive keywords and negative keywords.
As shown in fig. 7, at 710, an answer matching degree score between the obtained answer content and the reference answer is determined.
At 720, the positive keyword coverage score and negative keyword coverage score of the obtained answer content are determined. For example, the number of occurrences (matches) of positive keywords and negative keywords in the obtained answer content may be counted, and the positive keyword coverage score and negative keyword coverage score may be calculated from the counted numbers of positive keyword occurrences and negative keyword occurrences, respectively.
For example, for positive keywords, if the number of occurrences is 1, the positive keyword coverage score is 0.2; if the number of occurrences is 2-4, the positive keyword coverage score is 0.5; if the number of occurrences is 5-8, the positive keyword coverage score is 1; and if the number of occurrences is greater than 8, the positive keyword coverage score is 2.
For negative keywords, if the number of occurrences is 1, the negative keyword coverage score is -0.2; if the number of occurrences is 2-4, the negative keyword coverage score is -0.5; if the number of occurrences is 5-8, the negative keyword coverage score is -1; and if the number of occurrences is greater than 8, the negative keyword coverage score is -2.
It is noted that the above determination of the positive and negative keyword coverage scores is merely exemplary. In other examples, other suitable manners may be employed to calculate the positive and negative keyword coverage scores from the counted numbers of positive keyword occurrences and negative keyword occurrences, respectively.
At 730, the question score of the current interview question is determined according to the determined answer matching degree score, positive keyword coverage score, and negative keyword coverage score. For example, in one example, the answer matching degree score, positive keyword coverage score, and negative keyword coverage score may be summed directly to obtain the question score of the current interview question. In another example, the answer matching degree score, positive keyword coverage score, and negative keyword coverage score may each be weighted and then summed to obtain the question score of the current interview question.
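The example bucket values and the two combination options above can be sketched as follows. The bucket boundaries follow the example values in the text; the weights are illustrative assumptions.

```python
def coverage_score(count: int, positive: bool = True) -> float:
    """Map a keyword occurrence count to the example coverage-score buckets."""
    if count == 0:
        score = 0.0
    elif count == 1:
        score = 0.2
    elif count <= 4:
        score = 0.5
    elif count <= 8:
        score = 1.0
    else:
        score = 2.0
    return score if positive else -score

def question_score(match_score, pos_count, neg_count, weights=(1.0, 1.0, 1.0)):
    """Weighted sum of the three sub-scores; with unit weights this reduces
    to the direct-summation example."""
    w_m, w_p, w_n = weights
    return (w_m * match_score
            + w_p * coverage_score(pos_count, positive=True)
            + w_n * coverage_score(neg_count, positive=False))
```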
FIG. 8 illustrates an example flow diagram of a question score determination process according to an embodiment of the present specification. In the example of FIG. 8, the answer content of the interview object for the current question is audio/video content captured via an audio/video capture device; for example, the interview object is applying for a customer service position.
As shown in fig. 8, at 810, an answer matching degree score between the obtained answer content and the reference answer is determined.
At 820, the positive keyword coverage score and negative keyword coverage score of the obtained answer content are determined.
At 830, a speech emotion score of the answer content is determined by performing speech emotion analysis on the captured audio content.
For example, the captured audio content is segmented into sentences according to the user's pauses, and the resulting sentences are time-domain audio signals. MFCC features are then computed for each sentence signal and classified using a multilayer CNN network to determine an emotion classification. Examples of emotion classifications may include, for example but not limited to: surprise, amusement, appreciation, calmness, dissatisfaction, confusion, disappointment, contempt, and the like. A speech emotion score is then calculated according to the number of occurrences of each emotion classification.
For example, for the surprise classification, if the number of surprise occurrences is 1-2, the surprise score is 0.05; if 3-6, the surprise score is 0.125; if 7-10, the surprise score is 0.25; and if more than 10, the surprise score is 0.5.
For the amusement classification, if the number of amusement occurrences is 1-2, the amusement score is 0.05; if 3-6, the amusement score is 0.125; if 7-10, the amusement score is 0.25; and if more than 10, the amusement score is 0.5.
For the appreciation classification, if the number of appreciation occurrences is 1-2, the appreciation score is 0.08; if 3-6, the appreciation score is 0.2; if 7-10, the appreciation score is 0.4; and if more than 10, the appreciation score is 0.8.
For the calmness classification, if the number of calmness occurrences is 1-2, the calmness score is 0.02; if 3-6, the calmness score is 0.05; if 7-10, the calmness score is 0.1; and if more than 10, the calmness score is 0.2.
In addition, dissatisfaction, confusion, disappointment, and contempt can be classified as negative speech intonation. The numbers of dissatisfaction, confusion, disappointment, and contempt occurrences are therefore summed to obtain the negative speech intonation count, and a negative speech intonation score is then determined based on that count.
For example, if the negative speech intonation count is 1-2, the negative speech intonation score is -0.2; if 3-6, the score is -0.5; if 7-10, the score is -1; and if greater than 10, the score is -2.
A speech emotion score is then determined based on the surprise score, amusement score, appreciation score, calmness score, and negative speech intonation score. For example, in one example, these scores may be summed directly to obtain the speech emotion score. In another example, the surprise score, amusement score, appreciation score, calmness score, and negative speech intonation score may each be weighted and then summed to obtain the speech emotion score.
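A sketch of this speech emotion scoring, using the example bucket values above and simple summation (one of the two options described), is given below. The English emotion labels are translations used only for illustration.

```python
from collections import Counter

# (upper count bound, score) buckets following the example values in the text
POSITIVE_BUCKETS = {
    "surprise":     [(2, 0.05), (6, 0.125), (10, 0.25), (float("inf"), 0.5)],
    "amusement":    [(2, 0.05), (6, 0.125), (10, 0.25), (float("inf"), 0.5)],
    "appreciation": [(2, 0.08), (6, 0.2),   (10, 0.4),  (float("inf"), 0.8)],
    "calmness":     [(2, 0.02), (6, 0.05),  (10, 0.1),  (float("inf"), 0.2)],
}
NEGATIVE_LABELS = {"dissatisfaction", "confusion", "disappointment", "contempt"}
NEGATIVE_BUCKETS = [(2, -0.2), (6, -0.5), (10, -1.0), (float("inf"), -2.0)]

def bucket_score(count, buckets):
    if count == 0:
        return 0.0
    for bound, score in buckets:
        if count <= bound:
            return score

def speech_emotion_score(labels):
    """labels: list of per-sentence emotion classifications."""
    counts = Counter(labels)
    score = sum(bucket_score(counts.get(e, 0), b) for e, b in POSITIVE_BUCKETS.items())
    neg_count = sum(counts.get(e, 0) for e in NEGATIVE_LABELS)
    return score + bucket_score(neg_count, NEGATIVE_BUCKETS)
```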
At 840, a facial expression score for the responsive content is determined by performing facial expression analysis on the video content.
For example, the captured video content is video decimated at a prescribed time step (e.g., 10 s), drawing a prescribed number of consecutive frame shots (e.g., consecutive 5 frame shots) per video decimation operation. The extracted frame shots are then provided to a deep neural network for expression classification. Examples of expression classifications may include, for example, but are not limited to: neutral, happy, surprised, hurting heart, producing qi, dislike, fear, etc. The most expression classification in the prescribed number of frame shots serves as the facial expression classification for the current video extraction operation. Here, the continuous multi-needle extraction is adopted to reduce the misjudgment rate of the user transient expression classification of the video screenshot.
When the facial expression score is determined, a positive expression score and a negative expression score are determined first, and the facial expression score is then determined from the two. In one example, the positive expression score and the negative expression score may be directly summed to obtain the facial expression score. In another example, they may each be weighted and then summed to obtain the facial expression score.
Neutral and happy may be classified as positive expressions, and the sum of the numbers of neutral and happy occurrences is the number of positive expression occurrences. Surprised, sad, angry, disgusted and afraid may be classified as negative expressions, and the sum of their occurrence numbers is the number of negative expression occurrences. The positive expression score and the negative expression score may then be determined based on the positive and negative expression occurrence numbers, respectively.
For example, if the number of positive expression occurrences is 1-2, the positive expression score is 0.2; if it is 3-6, the score is 0.5; if it is 7-10, the score is 1; and if it is more than 10, the score is 2.
If the number of negative expression occurrences is 1-2, the negative expression score is -0.2; if it is 3-6, the score is -0.5; if it is 7-10, the score is -1; and if it is more than 10, the score is -2.
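As a rough illustration of the expression scoring above, the sketch below buckets the positive and negative expression counts and combines them. The label names follow the classification list above; the weights are placeholder values for the weighted-sum variant.

```python
def facial_expression_score(labels, weights=(1.0, 1.0)):
    """labels: per-sampling-point expression labels, e.g. from sample_expressions()."""
    positive = sum(1 for l in labels if l in ("neutral", "happy"))
    negative = len(labels) - positive   # surprised, sad, angry, disgusted, afraid

    def bucket(count, magnitudes=(0.2, 0.5, 1.0, 2.0)):
        if count <= 0:
            return 0.0
        if count <= 2:
            return magnitudes[0]
        if count <= 6:
            return magnitudes[1]
        if count <= 10:
            return magnitudes[2]
        return magnitudes[3]

    pos_score = bucket(positive)        # positive expression score
    neg_score = -bucket(negative)       # negative expression score
    return weights[0] * pos_score + weights[1] * neg_score
```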
After the answer matching degree score, the positive keyword coverage score, the negative keyword coverage score, the speech emotion score and the facial expression score are obtained as above, the topic score of the current interview question is determined at 850 based on these scores.
For example, in one example, the answer matching degree score, the positive keyword coverage score, the negative keyword coverage score, the speech emotion score and the facial expression score may be directly summed to obtain the topic score of the current interview question. In another example, these scores may each be weighted and then summed to obtain the topic score of the current interview question.
Alternatively, in another example, a response score of the answering content may also be calculated according to the pause durations of the interview subject when answering the question. For example, in one example, response score = -0.5 × (number of times the response duration is ≥ 10 s) - 0.25 × (number of times the response duration is in [5 s, 10 s)). Correspondingly, the topic score of the current interview question is then determined according to the answer matching degree score, the positive keyword coverage score, the negative keyword coverage score, the response score, the speech emotion score and the facial expression score.
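The pause-based response score and the combination step above can be sketched as follows; the weights and function names are illustrative assumptions rather than the patent's fixed implementation.

```python
def response_score(pause_durations):
    """pause_durations: response pause lengths in seconds for the current question."""
    long_pauses = sum(1 for d in pause_durations if d >= 10)
    medium_pauses = sum(1 for d in pause_durations if 5 <= d < 10)
    return -0.5 * long_pauses - 0.25 * medium_pauses

def topic_score(match, pos_cov, neg_cov, speech, face, response=0.0, weights=None):
    """Combine the per-dimension scores into the topic score of the current question."""
    parts = [match, pos_cov, neg_cov, response, speech, face]
    if weights is None:                                   # direct summation variant
        return sum(parts)
    return sum(w * p for w, p in zip(weights, parts))     # weighted-sum variant
```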
After the topic score for the current test topic is obtained as described above, at 640, the next test topic is determined from the interview paper, historical interview record, or interview topic library based on the topic score for the current test topic and the problem depth and problem breadth for the current test topic.
FIG. 9 illustrates an example flow diagram of an interview title determination process 900 according to embodiments of the specification.
As shown in FIG. 9, at 910, it is determined whether the topic score of the current interview question is greater than the corresponding score threshold. It is noted that each interview question can have its own score threshold; in other words, the score threshold of one interview question may be the same as or different from that of another interview question.
If the topic score of the current test topic is greater than the corresponding score threshold, at 920, it is determined whether the problem depth of the current test topic reaches the problem guide depth of the basic topic of the problem chain where the current test topic is located. Here, each interview question in the interview question library may be provided with a question guide depth, whereby a base question selected from the interview question library may have a question guide depth. The problem depth of the current test question refers to the problem position of the current test question in the problem chain.
If the problem depth of the current interview question reaches the problem guide depth of the basic question of the problem chain where the current interview question is located, the next basic question in the interview test paper is determined to be the next interview question at 930. If the problem depth of the current interview question does not reach the problem guide depth of the basic question of the problem chain where the current interview question is located, at 940, an associated question of the current interview question is selected from the historical interview record as the next interview question.
In some embodiments, each basic question can include a heuristic guide tag for problem depth expansion association and have a question chain throughput threshold. In this case, when an associated question of the current interview question is selected from the historical interview record as the next interview question, if the number of interview questions in the question chain where the current interview question is located whose topic scores are greater than the corresponding score threshold is smaller than the question chain throughput threshold, an associated question pointed to by the heuristic guide tag is selected from the historical interview record as the next interview question. If that number is not smaller than the question chain throughput threshold, a sub-question of the question chain where the current interview question is located is recommended from the historical interview record as the next interview question. In some embodiments, this recommendation may be made in a user collaborative filtering manner, as illustrated in the sketch below.
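As one possible reading of the user collaborative filtering step, the sketch below compares interview subjects by the scores they obtained on shared questions and recommends the sub-question favored by the most similar historical subjects. All structure and parameter names here are illustrative assumptions.

```python
from math import sqrt

def cosine(a, b):
    """Similarity between two {question_id: score} dicts over their shared questions."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[q] * b[q] for q in shared)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend_sub_question(current_scores, history, candidate_sub_questions, k=3):
    """history: {subject_id: {question_id: score}}; returns a question id or None."""
    ranked = sorted(history.items(),
                    key=lambda item: cosine(current_scores, item[1]),
                    reverse=True)[:k]                  # k most similar past subjects
    votes = {}
    for _, answered in ranked:
        for q in candidate_sub_questions:
            if q in answered:
                votes[q] = votes.get(q, 0) + 1
    return max(votes, key=votes.get) if votes else None
```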
If the topic score of the current interview question is not greater than the corresponding score threshold, at 950, it is determined whether the problem breadth of the current interview question reaches the problem breadth of the basic question of the interview test paper corresponding to the current interview question.
If the problem breadth of the current interview question reaches the problem breadth of the basic question of the interview test paper corresponding to the current interview question, the next basic question in the interview test paper is determined to be the next interview question at 930.
If the problem breadth of the current interview question does not reach the problem breadth of the basic question of the interview test paper corresponding to the current interview question, a basic question of the same type as the basic question to which the current interview question belongs is selected from the interview question library as the next interview question at 960. In some embodiments, each question includes a type extension tag for problem breadth expansion association. When a basic question of the same type is selected from the interview question library as the next interview question, a basic question of the same type pointed to by the type extension tag can be selected from the interview question library as the next interview question.
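The branching of process 900 can be summarized in a compact sketch. The scalar inputs and return labels are simplifications for illustration and do not prescribe the patent's data model.

```python
def determine_next_question(topic_score, score_threshold,
                            question_depth, guide_depth,
                            question_breadth, paper_breadth,
                            passed_in_chain, chain_throughput_threshold):
    """Return which source the next interview question comes from (process 900)."""
    if topic_score > score_threshold:
        if question_depth >= guide_depth:
            return "next_base_question_from_paper"       # step 930
        if passed_in_chain < chain_throughput_threshold:
            return "guided_question_from_history"        # step 940, heuristic guide tag
        return "collaborative_filtering_from_history"    # step 940, CF variant
    if question_breadth >= paper_breadth:
        return "next_base_question_from_paper"           # step 930
    return "same_type_question_from_bank"                # step 960, type extension tag
```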
By determining the next interview question in this way, problem depth association expansion and problem breadth association expansion are performed on a basic question of the interview test paper when that basic question has problem depth and/or problem breadth, so that a topic chain of the basic question is obtained.
FIG. 10 illustrates an example schematic of a topic chain of basic topics in an interview test paper according to an embodiment of the specification.
The topic chain shown in FIG. 10 is the topic chain of basic topic 1 in an interview test paper. As shown in FIG. 10, the resulting topic chain includes three question chains, namely question chain 1 based on basic topic 1, question chain 2 based on sub-basic topic 1-2, and question chain 3 based on sub-basic topic 1-3. Question chain 1 includes basic topic 1; question chain 2 includes sub-basic topic 1-2 and sub-question 1-2-1; and question chain 3 includes sub-basic topic 1-3, sub-question 1-3-1 and sub-question 1-3-2. In the topic chain, sub-basic topics 1-2 and 1-3 are interview questions selected from the interview question library according to the type extension tag and having the same question type as basic topic 1. Sub-question 1-2-1 and sub-question 1-3-1 are randomly selected from the sub-questions pointed to by the heuristic guide tag in the historical interview record. A sub-question pointed to by the heuristic guide tag is a sub-question of the same parent question under the same application scenario. For example, sub-question 1-2-1 is a sub-question under the same application scenario under sub-basic topic 1-2 in the historical interview record, and sub-question 1-3-1 is a sub-question under the same application scenario under sub-basic topic 1-3 in the historical interview record. Sub-question 1-3-2 is a sub-question recommended in a user collaborative filtering manner from among the sub-questions (i.e., sub-question A, sub-question B and sub-question C) of the question chain in which it is located in the historical interview record.
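One possible in-memory representation of such a topic chain is sketched below with plain data classes; the field names (`guide_depth`, `type_extension_tag`, and so on) are illustrative assumptions, not the patent's data model.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class InterviewQuestion:
    qid: str
    guide_depth: int = 1                         # problem guide depth
    breadth: int = 1                             # problem breadth (for basic topics)
    type_extension_tag: Optional[str] = None     # for problem breadth expansion
    heuristic_guide_tag: Optional[str] = None    # for problem depth expansion
    children: List["InterviewQuestion"] = field(default_factory=list)

# Question chain 3 of FIG. 10: sub-basic topic 1-3 -> sub-question 1-3-1 -> sub-question 1-3-2
sub_1_3_2 = InterviewQuestion("1-3-2")
sub_1_3_1 = InterviewQuestion("1-3-1", children=[sub_1_3_2])
sub_basic_1_3 = InterviewQuestion("1-3", guide_depth=3, children=[sub_1_3_1])
```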
Returning to FIG. 6, after the next interview question is determined as above, at 650, it is determined whether there is a next interview question. If a next interview question exists, the determined next interview question is provided to the interview subject for question answering at 660.
The interview flow guidance process according to the embodiments of the present specification is described below, taking the topic chain shown in FIG. 10 as an example. In the example of FIG. 10, the problem breadth of basic topic 1 is 3, the problem guide depth of sub-basic topic 1-2 is 3, and the problem guide depth of sub-basic topic 1-3 is 3.
As shown in FIG. 10, after basic topic 1 of the interview test paper is provided to the interview subject for answering and the topic score of basic topic 1 is determined from the answering content, it is determined that the topic score of basic topic 1 is not greater than the corresponding score threshold and that the current problem breadth of basic topic 1 is 1. Therefore, sub-basic topic 1-2 of the same type is selected from the interview question library according to the type extension tag, thereby realizing problem breadth expansion, and sub-basic topic 1-2 is provided to the interview subject as the next interview question for answering.
After the answering content of the interview subject for sub-basic topic 1-2 is obtained and the topic score of sub-basic topic 1-2 is determined, it is determined that the topic score of sub-basic topic 1-2 is greater than the corresponding score threshold, but the current problem depth of sub-basic topic 1-2 is 1, which is smaller than the problem guide depth of sub-basic topic 1-2. Therefore, sub-question 1-2-1 is randomly selected from the sub-questions of sub-basic topic 1-2 pointed to by the heuristic guide tag in the historical interview record, and sub-question 1-2-1 is provided to the interview subject as the next interview question for answering.
After the answering content of the interview subject for sub-question 1-2-1 is obtained and the topic score of sub-question 1-2-1 is determined, it is determined that the topic score of sub-question 1-2-1 is not greater than the corresponding score threshold, and the current problem breadth is 2, which is smaller than the problem breadth of basic topic 1. Therefore, sub-basic topic 1-3 of the same type is selected from the interview question library according to the type extension tag, thereby realizing problem breadth expansion, and sub-basic topic 1-3 is provided to the interview subject as the next interview question for answering.
After the answering content of the interview subject for sub-basic topic 1-3 is obtained and the topic score of sub-basic topic 1-3 is determined, it is determined that the topic score of sub-basic topic 1-3 is greater than the corresponding score threshold, but the current problem depth of sub-basic topic 1-3 is 1, which is smaller than the problem guide depth of sub-basic topic 1-3. Therefore, sub-question 1-3-1 is randomly selected from the sub-questions of sub-basic topic 1-3 pointed to by the heuristic guide tag in the historical interview record, and sub-question 1-3-1 is provided to the interview subject as the next interview question for answering.
After the answering content of the interview subject for sub-question 1-3-1 is obtained and the topic score of sub-question 1-3-1 is determined, it is determined that the topic score of sub-question 1-3-1 is greater than the corresponding score threshold, and the current problem depth of sub-question 1-3-1 is 2, which is smaller than the problem guide depth of sub-basic topic 1-3. Therefore, sub-question 1-3-2 is recommended in a user collaborative filtering manner from the sub-questions of the question chain in which sub-question 1-3-1 is located in the historical interview record (i.e., sub-question A, sub-question B and sub-question C), and sub-question 1-3-2 is provided to the interview subject as the next interview question for answering.
After the answering content of the interview subject for sub-question 1-3-2 is obtained and the topic score of sub-question 1-3-2 is determined, it is determined that the topic score of sub-question 1-3-2 is greater than the corresponding score threshold, and the current problem depth of sub-question 1-3-2 is 3, which is equal to the problem guide depth of sub-basic topic 1-3. Therefore, basic topic 2 is selected from the interview test paper and provided to the interview subject as the next interview question for answering.
If there is no next interview question, i.e., all the basic questions in the interview test paper have been asked, the process proceeds to 230. At 230, an interview score for the interview subject is determined based on the topic scores of the respective basic questions in the interview test paper.
For a basic question in the interview test paper, if the basic question has no problem depth expansion and/or problem breadth expansion, its topic score is the topic score determined for that single basic question. If the basic question has problem depth expansion and/or problem breadth expansion, so that an expanded topic chain is associated with it, its topic score is determined based on the topic scores of the individual interview questions in the topic chain.
For example, first, the response score of each question chain in the topic chain of the basic question is determined. The response score of each question chain is the sum of the topic scores of all questions on the question chain divided by the question depth of the question chain.
That is, the response score of each question chain can be calculated using the following formula:
question chain score = sum (question score of each question)/question depth of the question chain.
For example, for basic topic 1 shown in FIG. 10, the associated expanded topic chain includes three question chains, namely question chain 1 based on basic topic 1, question chain 2 based on sub-basic topic 1-2, and question chain 3 based on sub-basic topic 1-3. Question chain 1 includes basic topic 1; question chain 2 includes sub-basic topic 1-2 and sub-question 1-2-1; and question chain 3 includes sub-basic topic 1-3, sub-question 1-3-1 and sub-question 1-3-2. The question depth of question chain 1 is 1, the question depth of question chain 2 is 2, and the question depth of question chain 3 is 3.
According to the question chain score calculation formula, the question chain score of each question chain in the topic chain can be obtained. That is, question chain 1 score = basic topic 1 score / 1; question chain 2 score = (sub-basic topic 1-2 score + sub-question 1-2-1 score) / 2; and question chain 3 score = (sub-basic topic 1-3 score + sub-question 1-3-1 score + sub-question 1-3-2 score) / 3.
Then, the topic score of the basic question is determined according to the response scores of all the question chains. For example, basic topic 1 score = (question chain 1 score + question chain 2 score + question chain 3 score) / number of question chains.
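A minimal sketch of the chain-score and basic-topic-score formulas above, using the three chains of FIG. 10 with made-up illustrative scores:

```python
def question_chain_score(question_scores):
    """Sum of the topic scores on the chain divided by the chain's question depth."""
    return sum(question_scores) / len(question_scores)

def base_topic_score(chains):
    """chains: list of per-chain score lists, e.g. the three chains of FIG. 10."""
    chain_scores = [question_chain_score(c) for c in chains]
    return sum(chain_scores) / len(chain_scores)

# chain 1 = [basic topic 1], chain 2 = [sub-basic 1-2, sub 1-2-1],
# chain 3 = [sub-basic 1-3, sub 1-3-1, sub 1-3-2]; scores are made up
score = base_topic_score([[0.4], [0.7, 0.3], [0.8, 0.6, 0.5]])
```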
In some embodiments, the interview test paper is organized into at least two structural modules, each structural module having a scoring weight. In this case, when calculating the score of the interview object, the interview score of the interview object is determined according to the question score of each basic question in the interview test paper and the score weight of the structure module where each basic question is located.
For example, in the case where the interview test paper is organized into a basic expression module, a personality search module, a job recognition module and a competency search module as above, the interview score of the interview subject may be determined according to the formula: interview score = sum(basic expression topic scores) × basic expression weight + sum(personality search topic scores) × personality search weight + sum(job recognition topic scores) × job recognition weight + sum(competency search topic scores) × competency search weight.
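A minimal sketch of the module-weighted interview score; the module names mirror the example formula above, and the weights are made-up illustrative values.

```python
def interview_score(module_topic_scores, module_weights):
    """module_topic_scores: {module: [basic-topic scores]}; module_weights: {module: weight}."""
    return sum(sum(scores) * module_weights[m]
               for m, scores in module_topic_scores.items())

score = interview_score(
    {"basic_expression": [0.6, 0.8], "personality_search": [0.7],
     "job_recognition": [0.5], "competency_search": [0.9]},
    {"basic_expression": 0.2, "personality_search": 0.3,
     "job_recognition": 0.2, "competency_search": 0.3},
)
```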
The intelligent interview method according to the embodiments of the present specification is described above with reference to fig. 1 to 10.
By using the intelligent interview method, when the answering content of a basic question indicates that problem depth expansion or problem breadth expansion should be performed on that basic question, an associated interview question is selected from the historical interview record or the interview question library for the interview subject to answer, rather than simply moving on to the next basic question in the interview test paper. The content of the interview questions is therefore more flexible, and the interview subject can be examined comprehensively in both problem depth and problem breadth around the basic questions of the interview test paper, so that the interview result better reflects the skill level and position competence of the interview subject.
By using the intelligent interview method, the topic score of the interview subject for an interview question is evaluated based on the answer matching degree of the answering content and on the positive keyword coverage and negative keyword coverage of the answering content, so that the topic score evaluation of the interview question is more accurate.
By using the intelligent interview method, the topic score of the interview subject for an interview question is evaluated based on the answer matching degree of the answering content, the positive keyword coverage and negative keyword coverage of the answering content, the speech emotion score and the facial expression score, so that the examination dimensions of the topic score evaluation better match the customer service application scenario and the accuracy of the topic score evaluation is further improved.
By using the intelligent interview method, the topic score of a basic question is evaluated by performing problem depth expansion and/or problem breadth expansion on the basic question of the interview test paper and taking into account the topic scores of all interview questions in the resulting topic chain, so that the obtained topic score more accurately reflects the evaluation influence introduced by the problem depth expansion and/or problem breadth expansion.
By using the intelligent interview method, the interview test paper is organized into a plurality of structural modules, and each structural module is given a scoring weight, so that the interview examination of the interview subject is more comprehensive and the interview result calculation is more accurate.
Fig. 11 illustrates a block diagram of an intelligent interview apparatus 1100 according to embodiments of the present description. As shown in fig. 11, the intelligent interview apparatus 1100 includes an interview paper generation unit 1110, an answering content acquisition unit 1120, a topic score determination unit 1130, an interview topic determination unit 1140, an interview topic provision unit 1150, and an interview score determination unit 1160.
The interview paper generation unit 1110 is configured to generate interview papers. The operation of the interview paper generation unit 1110 may refer to the operation described above with reference to 210 of fig. 2.
The answer content obtaining unit 1120 is configured to obtain the answer content of the interview object for the current interview question. The operation of the answering content obtaining unit 1120 may refer to the operation described above with reference to 620 of fig. 6.
The topic score determining unit 1130 is configured to determine a topic score of the current test topic based on the acquired answer content.
Fig. 12 is a block diagram showing an implementation example of the topic score determination unit 1200 according to the embodiments of the present specification. The topic score determination unit 1200 shown in fig. 12 may be applied to a customer service interview scenario, where the answering content of the interview subject for the current interview question includes audio/video content captured via an audio/video capture device. As shown in fig. 12, the topic score determination unit 1200 includes an answer matching degree score module 1210, a keyword coverage score module 1220, a speech emotion score module 1230, a facial expression score module 1240, and a topic score determination module 1250.
The answer matching degree score module 1210 is configured to determine an answer matching degree score between the obtained answering content and the reference answer. The operations of the answer matching degree score module 1210 may refer to the operations described above with reference to 810 of fig. 8.
The keyword coverage score module 1220 is configured to determine a positive keyword coverage score and a negative keyword coverage score for the obtained responsive content. The operations of the keyword coverage scoring module 1220 may refer to the operations described above with reference to 820 of FIG. 8.
The speech emotion score module 1230 is configured to determine the speech emotion score of the responsive content by performing speech emotion analysis on the audio content. The operation of the speech emotion scoring module 1230 may refer to the operation described above with reference to 830 of FIG. 8.
The facial expression score module 1240 is configured to determine the facial expression score of the response content by performing facial expression analysis on the video content. The operation of the facial expression score module 1240 may refer to the operation described above with reference to 840 of fig. 8.
The topic score determination module 1250 is configured to determine the topic score of the current test topic based on the determined answer match score, the positive keyword coverage score, the negative keyword coverage score, the speech emotion score and the facial expression score. The operation of the topic score determination module 1250 may refer to the operation described above with reference to 850 of FIG. 8.
In some embodiments, the topic score determination unit 1200 can also include a response score determination module (not shown). The response score determination module calculates the response score of the answering content according to the pause durations of the interview subject when answering the question. In this case, the topic score determination module 1250 is configured to determine the topic score of the current interview question from the determined answer matching degree score, positive keyword coverage score, negative keyword coverage score, response score, speech emotion score and facial expression score.
In some embodiments, the topic score determination unit may not include the speech emotion score module and the facial expression score module. In this case, the topic score determination module determines the topic score of the current test topic according to the determined answer matching degree score, the positive keyword coverage score and the negative keyword coverage score.
Returning to fig. 11, after the topic score of the current test topic is obtained as described above, the interview topic determination unit 1140 determines the next interview topic from the interview paper, the historical interview record or the interview topic library according to the topic score of the current test topic and the problem depth and the problem width of the current test topic.
In some embodiments, in response to the question score of the current test question being greater than the corresponding score threshold and the question depth of the current test question reaching the question guide depth of the basic question of the question chain in which the current test question is located, the interview question determination unit 1140 determines the next basic question in the interview test paper as the next interview question.
In response to the question score of the current test question being greater than the corresponding score threshold and the problem depth of the current test question not reaching the problem guide depth of the basic questions of the problem chain in which the current test question is located, the question determining unit 1140 selects an associated problem of the current test question from the historical question records as the next test question.
In response to that the question score of the current test question is not greater than the corresponding score threshold and the question extent of the current test question reaches the question extent of the basic question of the interview test paper corresponding to the current interview question, the interview question determining unit 1140 determines the next basic question in the interview test paper as the next interview question.
In response to that the question score of the current test question is not greater than the corresponding score threshold and the question extent of the current test question does not reach the question extent of the basic question of the interview test paper corresponding to the current interview question, the interview question determining unit 1140 selects a basic question having the same type as the basic question to which the current interview question belongs from the interview question library as the next interview question.
In some embodiments, each base topic includes a heuristic guide tag for problem depth expansion association and has a problem chain throughput threshold. In response to the number of interview questions in the question chain with the topic score larger than the corresponding score threshold value being smaller than the question chain throughput threshold value of the question chain with the current interview question being, the interview question determination unit 1140 selects an associated question pointed by the heuristic guidance tag from the historical interview records as the next interview question. In response to that the number of interview questions with the topic score larger than the corresponding score threshold in the question chain of the current interview question is not smaller than the question chain throughput threshold of the question chain, the interview question determining unit 1140 recommends a sub-question of the question chain of the current interview question as a next interview question from the historical interview record. In some embodiments, in response to that the number of interview topics with topic scores greater than the corresponding score threshold in the question chain of the current interview topic is not less than the question chain throughput threshold of the question chain, the interview topic determination unit 1140 may recommend a sub-question as a next interview topic in a user collaborative filtering manner from the sub-questions of the question chain of the current interview topic in the historical interview record.
In some embodiments, each topic includes a type extension tag for question breadth extension association. In response to that the question score of the current test question is not greater than the corresponding score threshold and the problem breadth of the current test question does not reach the problem breadth of the basic question of the interview test paper corresponding to the current interview question, the interview question determination unit 1140 selects one basic question of the same type pointed by the type extension tag from the interview question library as the next interview question.
After the next interview question is determined as described above, the interview question providing unit 1150 provides the determined next interview question to the interview subject for question answering. The operation of the interview title providing unit 1150 may refer to the operation described above with reference to 660 of fig. 6.
Interview score determination unit 1160 is configured to determine an interview score for an interview subject based on the topic scores of the respective base topics in the interview paper in response to determining that the next interview topic is no longer present.
For a basic topic in an interview test paper, if the basic topic does not have problem depth extension and/or problem breadth extension, the interview score determination unit 1160 determines a topic score for the basic topic based on the topic score determined for the single basic topic. If the basic topic has a problem depth extension and/or a problem breadth extension, thereby associatively extending the topic chain, the interview score determination unit 1160 determines the topic score for the basic topic based on the topic scores for each interview topic in the topic chain.
In some embodiments, the interview test paper is organized into at least two structural modules, each structural module having a scoring weight. In this case, the interview score determining unit 1160 determines the interview score of the interview subject according to the question score of each basic question in the interview paper and the score weight of the structure module in which each basic question is located.
It is noted that in some embodiments, the intelligent interview apparatus 1100 may not include the interview paper generation unit 1110 and/or the interview score determination unit 1160.
As described above with reference to fig. 1 to 12, the intelligent interview method, the intelligent interview apparatus and the intelligent interview system according to the embodiments of the present specification are described. The intelligent interview device can be realized by hardware, software or a combination of hardware and software.
Fig. 13 shows a schematic diagram of a computer-based implementation of an intelligent interview apparatus 1300, according to an embodiment of the present description. As shown in fig. 13, intelligent interview apparatus 1300 can include at least one processor 1310, storage (e.g., non-volatile storage) 1320, memory 1330, and communication interface 1340, and the at least one processor 1310, storage 1320, memory 1330, and communication interface 1340 are connected together via a bus 1360. The at least one processor 1310 executes at least one computer program (i.e., the above-described elements implemented in software) stored or encoded in memory.
In one embodiment, a computer program is stored in the memory that, when executed, causes the at least one processor 1310 to: obtaining the answering content of the interview object aiming at the current interview question; determining the question score of the current test question based on the acquired answering content; determining the next interview question from an interview test paper, a historical interview record or an interview question library according to the question score and the question depth and the question width of the current interview question, wherein the interview test paper comprises at least two basic questions selected from the interview question library; and providing the determined next interview question to the interview object for question answering.
It should be appreciated that the computer programs stored in the memory, when executed, cause the at least one processor 1310 to perform the various operations and functions described above in connection with fig. 1-12 in the various embodiments of the present specification.
According to one embodiment, a program product, such as a computer-readable medium (e.g., a non-transitory computer-readable medium), is provided. The computer-readable medium may have a computer program (i.e., the elements described above as being implemented in software) that, when executed by a processor, causes the processor to perform various operations and functions described above in connection with fig. 1-12 in various embodiments of the present specification. Specifically, a system or apparatus may be provided which is provided with a readable storage medium on which software program code implementing the functions of any of the above embodiments is stored, and causes a computer or processor of the system or apparatus to read out and execute instructions stored in the readable storage medium.
In this case, the program code itself read from the readable medium can realize the functions of any of the above-described embodiments, and thus the computer-readable code and the readable storage medium storing the computer-readable code constitute a part of the present invention.
Examples of the readable storage medium include floppy disks, hard disks, magneto-optical disks, optical disks (e.g., CD-ROMs, CD-Rs, CD-RWs, DVD-ROMs, DVD-RAMs, DVD-RWs), magnetic tapes, nonvolatile memory cards, and ROMs. Alternatively, the program code may be downloaded from a server computer or the cloud by a communication network.
According to one embodiment, a computer program product is provided that includes a computer program that, when executed by a processor, causes the processor to perform the various operations and functions described above in connection with fig. 1-12 in the various embodiments of the present specification.
It will be understood by those skilled in the art that various changes and modifications may be made in the above-disclosed embodiments without departing from the spirit of the invention. Accordingly, the scope of the invention should be limited only by the attached claims.
It should be noted that not all steps and units in the above flows and system structure diagrams are necessary, and some steps or units may be omitted according to actual needs. The execution order of the steps is not fixed, and can be determined as required. The apparatus structures described in the above embodiments may be physical structures or logical structures, that is, some units may be implemented by the same physical entity, or some units may be implemented by a plurality of physical entities, or some units may be implemented by some components in a plurality of independent devices.
In the above embodiments, the hardware units or modules may be implemented mechanically or electrically. For example, a hardware unit, module or processor may comprise permanently dedicated circuitry or logic (such as a dedicated processor, FPGA or ASIC) to perform the corresponding operations. The hardware units or processors may also include programmable logic or circuitry (e.g., a general purpose processor or other programmable processor) that may be temporarily configured by software to perform the corresponding operations. The specific implementation (mechanical, or dedicated permanent, or temporarily set) may be determined based on cost and time considerations.
The detailed description set forth above in connection with the appended drawings describes example embodiments but is not intended to represent all embodiments which may be practiced or which fall within the scope of the appended claims. The term "exemplary" used throughout this specification means "serving as an example, instance, or illustration," and does not mean "preferred" or "advantageous" over other embodiments. The detailed description includes specific details for the purpose of providing an understanding of the described technology. However, the techniques may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form in order to avoid obscuring the concepts of the described embodiments.
The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (16)

1. An intelligent interview method comprising:
obtaining the answering content of the interview object aiming at the current interview question;
determining the question score of the current test question based on the acquired answering content;
determining a next interview question from an interview test paper, a historical interview record or an interview question library according to the question score and the question depth and the question width of the current interview question, wherein the interview test paper comprises at least two basic questions selected from the interview question library; and
and providing the determined next test question to the interview object for question answering.
2. The intelligent interviewing method of claim 1 wherein determining the next interview question from an interview paper, a historical interview question record, or an interview question library based on the question score and the question depth or the question breadth of the current interview question comprises:
determining a next basic topic in the interview test paper as a next interview topic in response to the topic score being greater than a corresponding score threshold and the problem depth of the current interview topic reaching the problem guide depth of the basic topic of the problem chain in which the current interview topic is located,
selecting a related problem of the current test question from the historical interview records as a next interview question in response to the question score being larger than a corresponding score threshold value and the problem depth of the current test question not reaching the problem guide depth of the basic question of the problem chain where the current interview question is located,
determining a next basic subject in the interview test paper as a next interview subject in response to that the subject score is not greater than a corresponding score threshold and the problem breadth of the current interview subject reaches the problem breadth of the basic subject of the interview test paper corresponding to the current interview subject,
and responding to the problem score not larger than the corresponding score threshold value and the problem breadth of the current test question does not reach the problem breadth of the basic question of the interview test paper corresponding to the current interview question, and selecting a basic question with the same type as the basic question to which the current interview question belongs from the interview question library as the next interview question.
3. The intelligent interviewing method of claim 2 wherein each base topic includes a heuristic guide tag for question depth extension association and has a question chain throughput threshold,
in response to the question score being greater than the corresponding score threshold and the question depth of the current test question not reaching the question guide depth of the basic question of the question chain in which the current test question is located, selecting an associated question of the current test question from the historical interview records as a next test question comprising:
selecting one associated question pointed by the heuristic guide label from the historical interview records as a next interview question in response to the fact that the number of interview questions with the topic scores larger than the corresponding score threshold value in the question chain of the current interview question is smaller than the question chain throughput threshold value,
and in response to the fact that the number of the interview questions with the topic scores larger than the corresponding score threshold value in the question chain of the current interview questions is not smaller than the throughput threshold value of the question chain, recommending a sub-question of the question chain of the current interview questions as a next interview question from the historical interview record.
4. The intelligent interviewing method of claim 3, wherein recommending a sub-question of the question chain of the current interview question as a next interview question from the historical interview record in response to the number of interview questions in the question chain of the current interview question having a topic score greater than the corresponding score threshold not being less than the question chain throughput threshold comprises:
and in response to that the number of the interviewing questions with the topic scores larger than the corresponding score threshold value in the question chain of the current interviewing question is not smaller than the throughput threshold value of the question chain, recommending a subproblem of the question chain of the current interviewing question as a next interviewing question from the historical interviewing record according to a user collaborative filtering mode.
5. The intelligent interview method of claim 2 wherein each topic includes a type extension tag for question breadth extension association,
responding to the problem score is not larger than the corresponding score threshold value and the problem breadth of the current test question does not reach the problem breadth of the basic questions of the interview test paper corresponding to the current interview question, selecting one basic question of the same type from the interview question library as the next test question, wherein the step of selecting the basic question comprises the following steps:
and responding to the problem score is not larger than the corresponding score threshold value and the problem breadth of the current test question does not reach the problem breadth of the basic questions of the interview test paper corresponding to the current interview question, and selecting one basic question of the same type pointed by the type extension tag from the interview question library as the next interview question.
6. The intelligent interview method of claim 1 wherein each topic has positive keywords and negative keywords,
determining the topic score of the current test question based on the obtained answering content comprises:
determining an answer matching degree score between the acquired answer content and the reference answer;
determining a positive keyword coverage score and a negative keyword coverage score of the acquired answering content; and
and determining the question score of the current test question according to the determined answer matching degree score, the positive keyword coverage score and the negative keyword coverage score.
7. The intelligent interviewing method as claimed in claim 6, wherein said interviewing method is applied to customer service interviews, said interview subjects' answer content to current interview questions comprises audio/video content collected via an audio/video collecting device, said intelligent interviewing method further comprising:
determining a speech emotion score of the answering content by performing speech emotion analysis on the audio content; and
determining a facial expression score of the responsive content by performing facial expression analysis on the video content,
determining the topic score of the current test question according to the determined answer matching degree score, the positive keyword coverage score and the negative keyword coverage score comprises the following steps:
and determining the question score of the current test question according to the determined answer matching degree score, the positive keyword coverage score, the negative keyword coverage score, the speech emotion score and the facial expression score.
8. The intelligent interviewing method of claim 1, further comprising:
in response to determining that the next interview question is no longer present, determining an interview score for the interview subject based on the question scores for each of the base questions in the interview paper.
9. The intelligent interview method of claim 8 wherein the interview paper is organized into at least two structural modules, each structural module having a scoring weight,
determining the interview score of the interview object according to the question score of each basic question in the interview paper comprises the following steps:
and determining the interview score of the interview object according to the question score of each basic question in the interview paper and the score weight of the structure module where each basic question is located.
10. The intelligent interview method of claim 9, wherein the structure modules comprise a base expression module, a personality search module, a job recognition module, and a competency search module.
11. The intelligent interviewing method of claim 8, wherein the topic score for a base topic for which there is a problem depth extension and/or a problem breadth extension is determined by:
determining the response scores of all question chains in the question chains of the basic questions, expanding at least one sub-basic question through question breadth expansion of the basic questions, and performing question depth expansion on the basic question and each sub-basic question to obtain each question chain; and
and determining the question score of the basic question according to the response scores of all the question chains.
12. An intelligent interview apparatus comprising:
the answer content acquisition unit is used for acquiring the answer content of the interview object aiming at the current interview question;
a question score determining unit for determining a question score of the current test question based on the acquired answering content;
an interview question determining unit for determining the next interview question from an interview test paper, a historical interview record or an interview question library according to the question score and the problem depth and the problem width of the current interview question, wherein the interview test paper comprises at least two basic questions selected from the interview question library; and
and the interview question providing unit is used for providing the determined next interview question to the interview subject to answer the question.
13. An intelligent interview system comprising:
interview customer service end equipment;
an interview server apparatus comprising the intelligent interview apparatus of claim 12; and
and the data storage equipment is used for storing the interview question library.
14. An intelligent interview apparatus comprising:
at least one processor for executing a program code for the at least one processor,
a memory coupled to the at least one processor, and
a computer program stored in the memory, the computer program being executable by the at least one processor to perform the intelligent interview method of any one of claims 1-11.
15. A computer readable storage medium storing executable instructions that when executed cause a processor to perform the intelligent interview method of any one of claims 1-11.
16. A computer program product comprising a computer program for execution by a processor to implement the intelligent interview method of any one of claims 1 to 11.
CN202210999967.XA 2022-08-19 2022-08-19 Intelligent interviewing method, intelligent interviewing device and intelligent interviewing system Pending CN115345591A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210999967.XA CN115345591A (en) 2022-08-19 2022-08-19 Intelligent interviewing method, intelligent interviewing device and intelligent interviewing system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210999967.XA CN115345591A (en) 2022-08-19 2022-08-19 Intelligent interviewing method, intelligent interviewing device and intelligent interviewing system

Publications (1)

Publication Number Publication Date
CN115345591A true CN115345591A (en) 2022-11-15

Family

ID=83954336

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210999967.XA Pending CN115345591A (en) 2022-08-19 2022-08-19 Intelligent interviewing method, intelligent interviewing device and intelligent interviewing system

Country Status (1)

Country Link
CN (1) CN115345591A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116342329A (en) * 2023-05-25 2023-06-27 成都爱找我科技有限公司 One-stop service platform applied to wedding planning
CN116342329B (en) * 2023-05-25 2023-08-18 成都爱找我科技有限公司 One-stop service platform applied to wedding planning


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination