WO2022010255A1 - Method, system and computer-readable medium for deriving in-depth questions for automated evaluation of an interview video using a machine learning model - Google Patents

Method, system and computer-readable medium for deriving in-depth questions for automated evaluation of an interview video using a machine learning model

Info

Publication number
WO2022010255A1
Authority
WO
WIPO (PCT)
Prior art keywords
information, derived, question, evaluation, evaluator
Prior art date
Application number
PCT/KR2021/008644
Other languages
English (en)
Korean (ko)
Inventor
유대훈
이영복
Original Assignee
주식회사 제네시스랩
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 제네시스랩
Publication of WO2022010255A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N20/00 Machine learning
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 Operations research, analysis or management
    • G06Q10/0631 Resource planning, allocation, distributing or scheduling for enterprises or organisations
    • G06Q10/06315 Needs-based resource requirements planning or analysis
    • G06Q10/06316 Sequencing of tasks or work
    • G06Q10/0633 Workflow analysis
    • G06Q10/0637 Strategic management or analysis, e.g. setting a goal or target of an organisation; Planning actions based on goals; Analysis or evaluation of effectiveness of goals
    • G06Q10/0639 Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/10 Office automation; Time management
    • G06Q10/103 Workflow collaboration or project management
    • G06Q10/105 Human resources
    • G06Q10/1053 Employment or hiring

Definitions

  • The present invention relates to a method, system and computer-readable medium for deriving in-depth questions for the automated evaluation of an interview video using a machine learning model. More particularly, a question about a specific competency to be evaluated is presented to an evaluatee and a video of the evaluatee's answer is received; first output information is derived from that answer video; based on the plurality of behavioral indicators for the specific competency and the one or more derived behavioral indicators included in the first output information, in-depth questions are derived that can elicit the evaluatee's answers about the behavioral indicators not included among the derived behavioral indicators and about incompletely derived behavioral indicators; a video answering the in-depth questions is received from the evaluatee; and the specific competency is finally evaluated.
  • NCS: National Competency Standards.
  • Conventional competency evaluation must be performed by an evaluator who has received specialized training in the evaluation method or who has abundant experience. Such evaluation is costly and takes a long time, because the evaluator must personally carry out the detailed evaluation procedures even when the evaluation is conducted by an expert.
  • Moreover, when the evaluator determines that the evaluatee's answer contains no content related to the competency being evaluated, the evaluator must compose a related question so that the evaluatee can answer with the relevant content, present the question again, and analyze the new answer; this additional process requires yet more evaluation time.
  • An object of the present invention is therefore to provide a method, system and computer-readable medium that present a question about a specific competency to an evaluatee and receive the answer video, derive first output information from that video, derive in-depth questions that can elicit the evaluatee's answers about the behavioral indicators not yet derived and about incompletely derived behavioral indicators, receive a video answering the in-depth questions, and finally evaluate the specific competency.
  • To achieve the above object, an embodiment of the present invention provides an automated method, performed in a server system, for evaluating an evaluatee based on behavioral indicators, wherein a plurality of behavioral indicators and a plurality of questions are preset in the server system for a specific competency.
  • Each of the plurality of behavioral indicators is correlated with one or more of the plurality of questions.
  • The automated evaluation method includes: a general question step, comprising a first question providing step of providing the evaluatee with one or more of the preset questions for evaluating the specific competency, and a first output information derivation step of inputting the video of the evaluatee's answers to those questions into the machine learning model and deriving first output information that includes evaluation information on the evaluatee's specific competency and the derived behavioral indicators related to that evaluation information; an in-depth question setting step of setting one or more in-depth questions based on the one or more derived behavioral indicators, after the general question step has been performed one or more times; and a competency evaluation step of evaluating the specific competency based on the video of the evaluatee's answers to the in-depth questions and the first output information derived in the first output information derivation step.
  • In an embodiment of the present invention, the competency evaluation step includes: an in-depth question step, comprising a second question providing step of providing the evaluatee with one or more of the in-depth questions set in the in-depth question setting step, and a second output information derivation step of inputting the video of the evaluatee's answers to those in-depth questions into the machine learning model and deriving second output information that includes evaluation information on the evaluatee's specific competency and the derived behavioral indicators related to that evaluation information; and a comprehensive evaluation information derivation step of deriving comprehensive evaluation information on the evaluatee's specific competency based on the first output information and the second output information.
  • In an embodiment of the present invention, the in-depth question setting step compares the plurality of behavioral indicators set for the specific competency with the one or more derived behavioral indicators obtained through the general question step, determines the behavioral indicators that were not derived, and determines one or more in-depth questions that can elicit from the evaluatee answers related to those underived behavioral indicators.
  • In an embodiment of the present invention, the in-depth question setting step may also determine, as an incomplete behavioral indicator, a behavioral indicator that was derived but does not meet a preset discrimination criterion, and determine one or more in-depth questions that can elicit from the evaluatee answers related to that incomplete behavioral indicator (a sketch of this selection logic follows).
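  • The following is a minimal sketch of that selection logic. The data structures, function names and threshold value are illustrative assumptions, not the patent's actual schema:

```python
# Hypothetical illustration of the in-depth question setting step: indicators
# never derived, plus derived indicators whose discovery probability falls
# below a preset discrimination criterion ("incomplete"), drive the choice of
# follow-up questions correlated with those indicators.
DISCRIMINATION_THRESHOLD = 0.6  # preset discrimination criterion (assumed value)

def set_in_depth_questions(all_indicators, derived, question_map):
    """all_indicators: set of behavioral indicators preset for the competency;
    derived: dict {indicator: discovery probability} from the first output info;
    question_map: dict {indicator: [candidate in-depth questions]} (preset)."""
    underived = all_indicators - set(derived)
    incomplete = {i for i, p in derived.items() if p < DISCRIMINATION_THRESHOLD}
    targets = underived | incomplete
    return [q for indicator in targets for q in question_map.get(indicator, [])]
```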
  • In an embodiment of the present invention, the first output information derived in the first output information derivation step may instead be input to a machine-learning-based in-depth question recommendation model, which derives one or more in-depth questions that can elicit from the evaluatee answers related to the behavioral indicators that were not derived.
  • In an embodiment of the present invention, the first output information derivation step and the second output information derivation step may separate video information and audio information from the evaluatee's answer video, pre-process each of the separated video and audio information, and input them to the machine learning model.
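  • As a rough sketch, the separation could be performed as follows. The ffmpeg invocation is standard; the pre-processing choices and model interface are assumptions for illustration only:

```python
# Separate an answer video into its image (video-only) and audio streams with
# ffmpeg, then hand both, after pre-processing, to the machine learning model.
import subprocess

def separate_streams(answer_video: str) -> tuple[str, str]:
    video_only, audio_only = "video_only.mp4", "audio_only.wav"
    # -an drops the audio stream (video is stream-copied); -vn drops the video.
    subprocess.run(["ffmpeg", "-y", "-i", answer_video, "-an", "-c:v", "copy",
                    video_only], check=True)
    subprocess.run(["ffmpeg", "-y", "-i", answer_video, "-vn", audio_only],
                   check=True)
    return video_only, audio_only

video_path, audio_path = separate_streams("answer.mp4")
# Hypothetical pre-processing and model call; the patent does not fix these:
# video_features = preprocess_frames(video_path)   # e.g. face crops, landmarks
# audio_features = preprocess_audio(audio_path)    # e.g. spectrogram, prosody
# first_output = ml_model.predict(video_features, audio_features)
```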
  • In an embodiment of the present invention, the first output information derivation step and the second output information derivation step may include: deriving text information from the evaluatee's answer video; performing embedding to express the derived text information as a vector; and inputting the embedded vector into the machine learning model.
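  • A minimal sketch of this text path, using one off-the-shelf sentence-embedding library as a stand-in for the patent's unspecified embedding (the encoder choice and the downstream classifier are assumptions):

```python
# Express the transcript as a fixed-size vector and feed it to a (hypothetical)
# classifier that outputs discovery probabilities per behavioral indicator.
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed choice of encoder

def embed_transcript(transcript: str):
    return embedder.encode(transcript)  # returns a fixed-size float vector

# vec = embed_transcript("I mediated the conflict by ...")
# probabilities = indicator_classifier.predict(vec)  # hypothetical model
```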
  • In an embodiment of the present invention, the first output information derived in the first output information derivation step and the second output information derived in the second output information derivation step may further include discovery probability information for the derived behavioral indicators related to the evaluation information, and the text information of the evaluatee's answer video corresponding to that discovery probability information.
  • In an embodiment of the present invention, the comprehensive evaluation information derived in the comprehensive evaluation information derivation step may include a score for the specific competency calculated by synthesizing the discovery probability information for each of the derived behavioral indicators obtained in the first output information derivation step and the second output information derivation step.
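  • As an illustration of such synthesis, one possible aggregation rule is sketched below; the patent only requires that the per-indicator discovery probabilities be combined into one competency score, so the rule itself is an assumption:

```python
# Combine per-indicator discovery probabilities from the first and second
# output information into a single competency score on the 1-5 scale used
# elsewhere in this document.
def synthesize_score(first_probs: dict, second_probs: dict) -> float:
    indicators = set(first_probs) | set(second_probs)
    # Keep the best evidence per indicator across both question rounds.
    best = [max(first_probs.get(i, 0.0), second_probs.get(i, 0.0))
            for i in indicators]
    mean = sum(best) / len(best) if best else 0.0
    return round(1.0 + 4.0 * mean, 1)  # map [0, 1] onto [1, 5]

# synthesize_score({"collaboration": 0.8},
#                  {"collaboration": 0.9, "initiative": 0.4})  # -> 3.6
```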
  • In an embodiment of the present invention, the comprehensive evaluation information derived in the comprehensive evaluation information derivation step may include a score for the specific competency derived from one or more of: the discovery probability information and text information for the derived behavioral indicators included in the first output information and the second output information, the basic score information for the corresponding answer videos, and the feature information generated by the machine learning model in deriving the first output information and the second output information.
  • In an embodiment of the present invention, the comprehensive evaluation information derived in the comprehensive evaluation information derivation step may include a score for the specific competency calculated by synthesizing the result information of the pre-processing of each answer video input in the first output information derivation step and the second output information derivation step.
  • Another embodiment of the present invention provides a server system for performing the automated evaluation of an evaluatee based on behavioral indicators, in which a plurality of behavioral indicators and a plurality of questions are preset for a specific competency.
  • Each of the plurality of behavioral indicators is correlated with one or more of the plurality of questions, and the server system includes: a general question unit, comprising a first question providing unit that provides the evaluatee with one or more of the preset questions for evaluating the specific competency, and a first output information derivation unit that inputs the video of the evaluatee's answers to those questions into the machine learning model and derives first output information including evaluation information on the evaluatee's specific competency and the derived behavioral indicators related to that evaluation information; an in-depth question setting unit that sets one or more in-depth questions based on the one or more derived behavioral indicators after the general question unit has operated one or more times; and a competency evaluation unit that evaluates the specific competency based on the video of the evaluatee's answers to the in-depth questions and the first output information.
  • Yet another embodiment of the present invention provides a computer-readable medium for performing the automated evaluation method, in which a plurality of behavioral indicators and a plurality of questions are preset in the server system for a specific competency and each behavioral indicator is correlated with one or more of the questions; the automated evaluation method includes the general question step, comprising the first question providing step of providing the evaluatee with one or more of the preset questions for evaluating the specific competency and the first output information derivation step of inputting the video of the evaluatee's answers into the machine learning model and deriving first output information including evaluation information on the evaluatee's specific competency and the related derived behavioral indicators; the in-depth question setting step of setting one or more in-depth questions based on the derived behavioral indicators; and the competency evaluation step, as described above.
  • According to an embodiment of the present invention, since the evaluation result is derived from the evaluatee's answer video through the machine learning model for evaluating a specific competency, the time and cost required for evaluation are reduced while objective evaluation results are obtained.
  • According to an embodiment of the present invention, the evaluation interface provided to the evaluator in the evaluation interface providing step includes a script layer, and the script layer displays a script of the evaluatee's answer video, so that the evaluator can easily grasp the evaluatee's answer.
  • According to an embodiment of the present invention, when the evaluator selects a specific area of the script, a behavioral indicator list area for the corresponding question or specific competency is displayed, so that the evaluator can easily select the relevant behavioral indicator.
  • According to an embodiment of the present invention, the evaluation interface includes a behavioral indicator layer that displays the specific area of the script selected in the script layer together with the specific behavioral indicator selected from the behavioral indicator list area, so that the evaluator can easily grasp the evaluatee's answer for each behavioral indicator.
  • According to an embodiment of the present invention, the evaluation interface includes an in-depth question layer through which the evaluator inputs an in-depth question appropriate to the evaluatee's answer video, and a remarks layer through which the evaluator inputs notable points about the answer video; an evaluator being trained in the evaluation method can thereby compare his or her in-depth questions and remarks with those written by experts in the evaluation method.
  • According to an embodiment of the present invention, since the evaluation result is derived by separating the video information and the audio information from the evaluatee's answer video and inputting each into the machine learning model, the context and intent of the answer can be grasped in detail and an accurate evaluation result derived.
  • According to an embodiment of the present invention, since the second evaluatee competency information derived through the machine learning model in the competency information derivation step includes discovery probability information for each behavioral indicator, the evaluation result can be provided objectively.
  • According to an embodiment of the present invention, since the second evaluatee competency information derived through the machine learning model in the competency information derivation step further includes the text of the answer video corresponding to the discovery probability information for each behavioral indicator, the evaluatee's answer corresponding to each behavioral indicator can be provided concretely.
  • According to an embodiment of the present invention, since in-depth questions are set based on the derived behavioral indicators included in the first output information and the plurality of behavioral indicators for the specific competency, in-depth questions that can elicit answers about behavioral indicators not yet observed can be provided without an evaluator.
  • According to an embodiment of the present invention, since the comprehensive evaluation information derived in the comprehensive evaluation information derivation step includes a score for the specific competency calculated by synthesizing the discovery probability information in the first output information and the second output information, the evaluatee's evaluation result can be recognized intuitively.
  • FIG. 1 schematically shows the form of an overall system for performing a method for providing automated evaluation of an interview video using a machine learning model according to an embodiment of the present invention.
  • FIG. 2 schematically shows an internal configuration of a server system according to an embodiment of the present invention.
  • FIG. 3 schematically illustrates the configuration of behavioral indicators set according to a specific competency to be evaluated and of a question provided to an evaluatee according to an embodiment of the present invention.
  • FIG. 4 schematically illustrates a method of providing automated evaluation of an interview video using a machine learning model performed in a server system according to an embodiment of the present invention.
  • FIG. 5 schematically illustrates a screen on which an evaluatee answers a question according to an embodiment of the present invention.
  • FIG. 6 schematically shows the configuration of an evaluation interface according to an embodiment of the present invention.
  • FIG. 7 schematically shows a configuration in which a behavioral indicator layer is displayed according to the evaluator's selection in the script layer according to an embodiment of the present invention.
  • FIG. 8 schematically shows the configuration of another type of evaluation interface according to an embodiment of the present invention.
  • FIG. 9 schematically illustrates a process of training a machine learning model according to the model learning step according to an embodiment of the present invention.
  • FIG. 10 schematically shows a detailed configuration of a competency information derivation unit according to an embodiment of the present invention.
  • FIG. 11 schematically illustrates a method of deriving in-depth questions for automated evaluation of an interview video performed in a server system according to an embodiment of the present invention.
  • FIG. 13 schematically shows detailed steps of the in-depth question setting step implemented in another method according to an embodiment of the present invention.
  • FIG. 16 schematically shows a configuration for deriving output information from a machine learning model by inputting a video of an evaluatee's answer into the machine learning model according to an embodiment of the present invention.
  • FIG. 17 schematically shows a configuration for setting an in-depth question according to the output information derived by inputting a video of an evaluatee's answer into a machine learning model, and for deriving comprehensive evaluation information, according to an embodiment of the present invention.
  • FIG. 18 schematically illustrates a configuration for deriving comprehensive evaluation information by further including feature information derived from the machine learning model to which a video of an evaluatee's answer is input according to an embodiment of the present invention.
  • FIG. 19 schematically shows the internal configuration of a feature extraction model according to an embodiment of the present invention.
  • FIG. 21 schematically illustrates an internal configuration of a computing device according to an embodiment of the present invention.
  • Terms such as first and second may be used to describe various elements, but the elements are not limited by these terms; the terms are used only to distinguish one element from another.
  • For example, a first element may be termed a second element, and similarly a second element may be termed a first element. The term "and/or" includes a combination of a plurality of related listed items, or any one of the plurality of related listed items.
  • As used herein, a "unit" includes a unit realized by hardware, a unit realized by software, and a unit realized using both.
  • One unit may be implemented using two or more pieces of hardware, and two or more units may be implemented by one piece of hardware.
  • A '~unit' is not limited to software or hardware; it may be configured to reside in an addressable storage medium or to drive one or more processors. Thus, by way of example, '~unit' encompasses components such as software components, object-oriented software components, class components and task components, as well as processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays and variables.
  • Components and '~units' may be combined into a smaller number of components and '~units', or further separated into additional components and '~units'.
  • In addition, components and '~units' may be implemented to drive one or more CPUs in a device or a secure multimedia card.
  • The 'evaluator terminal', 'first evaluatee terminal', 'second evaluatee terminal' and 'evaluation education officer terminal' mentioned below may each be implemented as a computer or a portable terminal capable of accessing a server or another terminal through a network.
  • Here, the computer includes, for example, a notebook, desktop or laptop computer equipped with a web browser, and the portable terminal is, for example, a wireless communication device that guarantees portability and mobility, such as a terminal based on any of the following:
  • PCS: Personal Communication System
  • GSM: Global System for Mobile communications
  • PDC: Personal Digital Cellular
  • PHS: Personal Handyphone System
  • PDA: Personal Digital Assistant
  • IMT: International Mobile Telecommunication
  • CDMA: Code Division Multiple Access
  • W-CDMA: Wideband Code Division Multiple Access
  • WiBro: Wireless Broadband Internet
  • The network refers to a wired network such as a local area network (LAN), a wide area network (WAN) or a value added network (VAN), or any kind of wireless network such as a mobile radio communication network or a satellite communication network.
  • The method of deriving in-depth questions for automated evaluation performed in the server system corresponds to a specific way of deriving evaluation results by inputting answer videos into the machine learning model, within the method of providing automated evaluation of an interview video using a machine learning model performed in the server system.
  • Since the machine learning model is trained based on information evaluated by the evaluator, the overall method of deriving evaluation results for an interview video through the trained machine learning model is explained first.
  • FIG. 1 schematically shows the form of an overall system for performing a method for providing automated evaluation of an interview video using a machine learning model according to an embodiment of the present invention.
  • As shown in FIG. 1, the method for providing automated evaluation of an interview video is performed by the server system 1000, and the server system 1000 can communicate with the evaluator terminal 2000, the first evaluatee terminal 3000, the second evaluatee terminal 4000 and the evaluation education officer terminal 5000, which correspond to external terminals.
  • The server system 1000 may include one or more servers, and the servers may communicate with one another to perform the method of providing automated evaluation of an interview video.
  • The evaluator terminal 2000 corresponds to a terminal used by the evaluator, the person who performs evaluation based on the evaluatee's answer video.
  • Specifically, the evaluator receives an answer video from the server system 1000 through the evaluator terminal 2000 and performs evaluation.
  • The first evaluatee competency information, corresponding to the information evaluated by the evaluator, may be used as training data for the machine learning model described later.
  • Meanwhile, the evaluator may be a person who performs a simulated evaluation based on the evaluatee's interview video in order to be trained in the evaluation method of the present invention.
  • In this case, the first evaluatee competency information can be used by the evaluation education officer, who uses the evaluation education officer terminal 5000, as material for teaching the evaluation method.
  • The first evaluatee terminal 3000 corresponds to a terminal used by the first evaluatee, the person who answers the questions provided through the server system 1000. Specifically, the first evaluatee receives one or more questions from the server system 1000 and answers each presented question on the first evaluatee terminal 3000, and the resulting answer video is transmitted to the server system 1000. The evaluator then receives the transmitted answer video through the evaluator terminal 2000 and performs the evaluation as described above, whereby the first evaluatee competency information is derived.
  • The second evaluatee terminal 4000 corresponds to a terminal used by the second evaluatee, the person who answers the questions provided through the server system 1000.
  • Specifically, the second evaluatee receives one or more questions from the server system 1000 and answers each presented question on the second evaluatee terminal 4000, and the resulting answer video is transmitted to the server system 1000.
  • The second evaluatee's answer video transmitted to the server system 1000 is input to the machine learning model, and second evaluatee competency information, corresponding to the automated evaluation result for the answer video, is derived in the server system 1000.
  • Meanwhile, the number of evaluatee terminals communicating with the server system 1000 shown in FIG. 1 is only for ease of explanation; the server system 1000 can communicate with one or more evaluatee terminals.
  • The answer video performed by the first evaluatee on the first evaluatee terminal 3000 is not limited to evaluation by the evaluator through the evaluator terminal 2000; as described above, it may also be input to the machine learning model of the server system 1000 to derive second evaluatee competency information.
  • Likewise, the answer video performed by the second evaluatee on the second evaluatee terminal 4000 is not limited to deriving second evaluatee competency information through the server system 1000; it may also be provided to the evaluator and used for the evaluator to perform the evaluation.
  • That is, the designations first evaluatee and second evaluatee are used for ease of explanation and do not imply a difference in configuration.
  • The evaluation education officer terminal 5000 is a terminal used by the evaluation education officer, a person with expertise in the evaluation method based on answer videos.
  • Specifically, the evaluation education officer performs evaluation through the evaluation education officer terminal 5000 and the evaluation result is transmitted to the server system 1000; the result is provided to a person being trained in the evaluation method of the present invention, so that the trainee can compare his or her simulated evaluation with the evaluation performed by the evaluation education officer.
  • Meanwhile, account types corresponding to the evaluator, the evaluatees and the evaluation education officer exist on the server system 1000; each account type communicates with the server system 1000 through a specific terminal, and that terminal receives and presents to its user the information corresponding to its account type.
  • FIG. 2 schematically shows an internal configuration of a server system 1000 according to an embodiment of the present invention.
  • As shown in FIG. 2, the server system 1000 may include an evaluation interface providing unit 1100, a competency information receiving unit 1200, a model learning unit 1300, a question providing unit 1400, a competency information derivation unit 1500 and a DB 1600.
  • The evaluation interface providing unit 1100 provides the evaluator, through the evaluator terminal, with the answer video performed by the evaluatee together with an evaluation interface through which the evaluator inputs the first evaluatee competency information. The evaluator can thereby watch the evaluatee's answer video on the evaluation interface displayed on the evaluator terminal 2000 while checking the contents of his or her evaluation.
  • The competency information receiving unit 1200 receives, from the evaluator terminal 2000, the first evaluatee competency information input by the evaluator through the evaluation interface.
  • The model learning unit 1300 trains the machine learning model, and for this purpose the first evaluatee competency information received by the competency information receiving unit 1200 may be used as training data. More specifically, the model learning unit 1300 may train the machine learning model by processing the first evaluatee competency information into a form suitable for training.
  • The question providing unit 1400 provides the evaluatee with one or more preset questions so that the server system 1000 can evaluate the answer video for a specific competency. More specifically, the question providing unit 1400 may provide the evaluatee with one or more questions related to the competency selected by the evaluatee, or to the competency corresponding to the company, or the job at the company, to which the evaluatee has applied.
  • The competency information derivation unit 1500 derives second evaluatee competency information based on the video of the second evaluatee's answers to the questions provided through the question providing unit 1400.
  • Specifically, the competency information derivation unit 1500 may derive the second evaluatee competency information by inputting the second evaluatee's answer video into the machine learning model.
  • Furthermore, the competency information derivation unit 1500 sets in-depth questions related to specific behavioral indicators and derives comprehensive evaluation information, corresponding to the second evaluatee competency information, by further considering the answer video for those in-depth questions; this is described in more detail with reference to FIG. 10.
  • To this end, a machine-learning-based in-depth question recommendation model for setting in-depth questions in the competency information derivation unit 1500 may additionally be stored in the DB 1600.
  • Meanwhile, the machine learning model is a machine-learned model for performing evaluation based on answer videos; preferably, a machine learning model is provided individually for each competency to be evaluated, and the server system 1000 may therefore include a plurality of machine learning models.
  • In another embodiment of the present invention, the server system 1000 may include two or more servers, each containing some of the above components, with the servers communicating to perform the method of providing automated evaluation of an interview video using the machine learning model. For example, the functions provided to the evaluator or the evaluatee may be included in one server while the machine learning model and the functions for training it are included in another, and the method of the present invention can be performed through communication between the two servers.
  • FIG. 3 schematically illustrates the configuration of behavioral indicators set according to a specific competency to be evaluated and of a question provided to an evaluatee according to an embodiment of the present invention.
  • In the present invention, one or more behavioral indicators and one or more questions may be set for each competency to be evaluated in order to evaluate the evaluatee's competency.
  • A behavioral indicator is an evaluation criterion for a competency: by checking the answers in which a behavioral indicator is observed, the evaluator can assess the extent to which the evaluatee possesses the corresponding competency.
  • Meanwhile, each question is designed so that one or more behavioral indicators can be observed in the evaluatee's answer. For example, in an answer to the question 'How did you resolve conflicts between team members?', the behavioral indicator 'induces team members to collaborate for the goal of the team' may be observed.
  • Preferably, each question may be designed in a form that can induce answers concerning one or more of a situation, a task, an action and a result.
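  • Purely for illustration, the competency-indicator-question relationship described above could be represented as follows; the competency, indicators and questions shown are examples in the spirit of FIG. 3, not the patent's data schema:

```python
# One competency, its behavioral indicators, and the correlation of each
# preset question with the indicators it is designed to surface.
COMPETENCY = {
    "name": "teamwork",
    "indicators": [
        "induces team members to collaborate for the goal of the team",
        "shares information proactively with the team",
    ],
    "questions": {
        "How did you resolve conflicts between team members?": [
            "induces team members to collaborate for the goal of the team",
        ],
        "Describe a time you kept your team informed under pressure.": [
            "shares information proactively with the team",
        ],
    },
}
```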
  • The questions designed for each competency in this way may be provided to the first or second evaluatee through the first evaluatee terminal 3000 or the second evaluatee terminal 4000 by the question providing unit 1400.
  • More specifically, the question providing unit 1400 may provide the evaluatee with questions appropriate to the company to which the evaluatee has applied or with which a mock interview is to be conducted, or to the job at that company.
  • FIG. 4 schematically illustrates a method of providing automated evaluation of an interview video using a machine learning model performed by the server system 1000 according to an embodiment of the present invention.
  • First, the server system 1000 performs the evaluation interface providing step S10 of providing the evaluator with an evaluation interface including the answer video performed by the first evaluatee.
  • Specifically, the first evaluatee may request evaluation from the server system 1000 through the first evaluatee terminal 3000, and the question providing unit 1400 of the server system 1000 provides one or more questions corresponding to the request to the first evaluatee terminal 3000 so that the first evaluatee can generate an answer video.
  • The answer video thus generated by the first evaluatee may be transmitted to the server system 1000 and stored in the DB 1600.
  • When the evaluator requests it, the server system 1000 may then perform the evaluation interface providing step S10.
  • Accordingly, the evaluation interface including the answer video of the first evaluatee corresponding to the evaluator's request is displayed on the evaluator terminal 2000.
  • Through the evaluation interface displayed on the evaluator terminal 2000, the evaluator inputs the first evaluatee competency information, which includes evaluation information on the specific competency in the first evaluatee's answer video and the behavioral indicator corresponding to each piece of evaluation information; the evaluator terminal 2000 transmits the input first evaluatee competency information to the server system 1000, and the competency information receiving unit 1200 of the server system 1000 performs the competency information receiving step S11 to receive it.
  • The evaluation information included in the first evaluatee competency information may correspond to the portions of the first evaluatee's answer in which the corresponding behavioral indicator is observed.
  • In the model learning step S12, a machine learning model for a specific competency may be trained based on the plurality of first evaluatee competency information received through the competency information receiving step S11. Since the server system 1000 includes one or more machine learning models, one for each competency to be evaluated, when the machine learning model for a specific competency is trained in the model learning step S12, the first evaluatee competency information is labeled so that the information for the specific competency is distinguished from the information for other competencies, and the labeled first evaluatee competency information is used as training data.
  • In the model learning step S12, the machine learning model may also use, as training data, the answer videos corresponding to each piece of first evaluatee competency information used as training data.
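  • A minimal sketch of how the per-competency labeling described above might look; the record layout and field names are assumptions:

```python
# Keep only the (answer video, evaluator annotation) pairs labeled with the
# target competency, so that each competency's model trains on matching data.
def build_training_set(records, target_competency):
    """records: iterable of dicts such as
    {"video": path, "competency": str, "indicators": {...}, "score": float}"""
    positives = [r for r in records if r["competency"] == target_competency]
    return [(r["video"], r["indicators"], r["score"]) for r in positives]
```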
  • Meanwhile, when the second evaluatee requests evaluation, the server system 1000 performs the question providing step S13 of providing one or more preset questions corresponding to the request to the second evaluatee.
  • Here, the evaluation request made by the evaluatee to the server system 1000 may correspond to a request for direct evaluation by the evaluator, to a request for evaluation through the machine learning model of the server system 1000, or to a request for both.
  • In addition, the request of the first or second evaluatee may include information designating a specific company, a job at a specific company, or a specific competency to be evaluated.
  • The second evaluatee who has requested evaluation is provided with one or more questions through the question providing step S13 and generates an answer video through the second evaluatee terminal 4000.
  • The second evaluatee terminal 4000 transmits the generated answer video to the server system 1000, and the server system 1000 performs the competency information derivation step S14 of deriving second evaluatee competency information by inputting the received answer video into the machine learning model.
  • The second evaluatee competency information derived through the competency information derivation step S14 is competency information derived by the server system 1000 itself based on the second evaluatee's answer video. It may be derived in a form similar to the first evaluatee competency information evaluated by the evaluator, or in a different form, for example including discovery probability information for one or more behavioral indicators of the specific competency to be evaluated in the second evaluatee's answer video.
  • Preferably, first output information is derived for the second evaluatee's answer video; in-depth questions are set based on the one or more derived behavioral indicators included in the first output information and the plurality of behavioral indicators of the specific competency to be evaluated; second output information is then derived for the answer video to the in-depth questions; and the comprehensive evaluation information corresponding to the second evaluatee competency information can finally be derived, as described in more detail with reference to FIG. 11 and sketched below.
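  • Tying the preceding steps together, the two-round flow could be orchestrated as follows. The helpers ask, merge_discovery_probabilities and ml_model are hypothetical; set_in_depth_questions and synthesize_score reuse the earlier illustrative sketches:

```python
# General questions -> first output information -> in-depth questions
# -> second output information -> comprehensive evaluation information.
def evaluate(evaluatee, competency, ml_model, question_map):
    # Round 1: general questions preset for the competency.
    general_videos = ask(evaluatee, list(competency["questions"]))
    first_output = [ml_model.predict(v) for v in general_videos]
    derived = merge_discovery_probabilities(first_output)  # {indicator: prob}
    # Derive in-depth questions for underived/incomplete indicators.
    deep_questions = set_in_depth_questions(
        set(competency["indicators"]), derived, question_map)
    # Round 2: in-depth questions, then synthesis of both rounds.
    deep_videos = ask(evaluatee, deep_questions)
    second_output = [ml_model.predict(v) for v in deep_videos]
    return synthesize_score(derived,
                            merge_discovery_probabilities(second_output))
```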
  • The method may further include a comprehensive competency information derivation step S15, in which the plurality of second evaluatee competency information derived in the competency information derivation step S14 is input to a comprehensive machine learning model to derive comprehensive evaluatee competency information including score information on the degree to which the second evaluatee possesses the specific competency.
  • As described above, the second evaluatee's answer video is input to the machine learning model to derive second evaluatee competency information and evaluate the second evaluatee's specific competency; in another embodiment of the present invention, in the competency information derivation step S14, second evaluatee competency information is derived for each answer video performed by the second evaluatee for each of a plurality of questions about the specific competency.
  • In the comprehensive competency information derivation step S15, the comprehensive evaluatee competency information is derived based on the plurality of second evaluatee competency information derived in the competency information derivation step S14.
  • Specifically, the plurality of second evaluatee competency information may be input to the comprehensive machine learning model included in the server system 1000 to derive the comprehensive evaluatee competency information.
  • The comprehensive evaluatee competency information synthesizes the plurality of second evaluatee competency information derived for the answer videos to each of the questions provided to the second evaluatee and comprehensively calculates the degree to which the second evaluatee possesses the specific competency to be evaluated. Like the evaluation score input by the evaluator on the evaluation interface, the comprehensive evaluatee competency information includes score information on the degree of possession of the specific competency, so that the degree to which the evaluatee possesses the specific competency can be recognized quantitatively.
  • Meanwhile, the comprehensive machine learning model may correspond to a separate machine-learning-based model distinguished from the machine learning model described above, or the comprehensive machine learning model and the machine learning model may be included in one overall machine learning model, in which case the second evaluatee competency information derived from the machine learning model is input to the comprehensive machine learning model to derive the comprehensive evaluatee competency information.
  • FIG. 5 schematically illustrates a screen on which an evaluatee answers a question according to an embodiment of the present invention.
  • As described above, the first evaluatee terminal 3000 or the second evaluatee terminal 4000 is provided with one or more questions through the question providing step S13 performed by the server system 1000 and can generate an answer video.
  • Specifically, in the question providing step S13, one or more preset questions corresponding to the request of the first or second evaluatee are provided to the first evaluatee terminal 3000 or the second evaluatee terminal 4000.
  • For example, when the evaluatee requests evaluation for the job of a specific company, the question providing step S13 may provide questions related to one or more competencies associated with that job.
  • The first evaluatee terminal 3000 or the second evaluatee terminal 4000 to which the question is provided captures a video of the evaluatee's answer to the question through a camera module provided in the terminal.
  • In the screen shown in FIG. 5, the question provided in the question providing step S13, the time limit for answering the question, and the elapsed answering time are displayed at the bottom, and the evaluatee's answer video is displayed in real time at the top.
  • However, the present invention is not limited thereto; the screen may be configured with various display methods, for example by displaying the question first and then switching to show only the evaluatee's real-time answer video, or by providing the question in audible as well as text form.
  • The first evaluatee terminal 3000 and the second evaluatee terminal 4000 thus generate videos of the evaluatee's answers to the one or more questions provided through the question providing step S13 and transmit the generated answer videos to the server system 1000, so that evaluation of the answer videos can be performed.
  • FIG. 6 schematically shows the configuration of an evaluation interface according to an embodiment of the present invention.
  • The evaluation interface may be displayed on the evaluator terminal 2000 through the evaluation interface providing step S10 performed by the server system 1000.
  • On the evaluation interface, elements for the evaluator to perform evaluation based on the first evaluatee's answer video are displayed, and the answer video can be evaluated according to the evaluator's input.
  • Specifically, the evaluation interface includes an answer video layer L1 in which the answer video performed by the first evaluatee is displayed.
  • The answer video is played on the answer video layer L1 according to the evaluator's playback input, so that the evaluator can check the contents of the answer video.
  • In addition, the question for the answer video, more specifically the question provided in the question providing step S13 to generate the answer video, is displayed in text form, so that the evaluator can recognize more clearly from which question the answer video was generated.
  • The evaluation interface provided to the evaluator in the evaluation interface providing step S10 includes a script layer L2 in which a script generated based on the first evaluatee's answer video is displayed; when the evaluator selects a specific area of the script in the script layer L2, a behavioral indicator list area A1 containing one or more behavioral indicators corresponding to the question or the specific competency may be displayed.
  • The script layer L2 displays a script in which the content of the answer video displayed on the answer video layer L1 has been converted into text form.
  • To this end, the server system 1000 may include a Speech-to-Text (STT) module that converts the audio information of the answer video into text information, and may derive the script for the answer video through the STT module.
  • Preferably, the server system 1000 further includes a video/audio separation module, so that the video information and audio information of the answer video are separated through that module and the audio information is input to the STT module to derive the script. The evaluator can therefore clearly grasp, in text form through the script layer L2, speech that is not clearly heard in the answer video played on the answer video layer L1.
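  • As a sketch, the script could be derived from the separated audio with an off-the-shelf STT backend; the SpeechRecognition package is one possibility, and the patent does not name any particular STT implementation:

```python
# Transcribe the separated audio track of an answer video into the script
# shown on the script layer L2.
import speech_recognition as sr

def transcribe(audio_path: str, language: str = "ko-KR") -> str:
    recognizer = sr.Recognizer()
    with sr.AudioFile(audio_path) as source:
        audio = recognizer.record(source)  # read the whole audio file
    return recognizer.recognize_google(audio, language=language)

# script = transcribe("audio_only.wav")
```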
  • Meanwhile, the script may be generated not only by the STT module but also by the evaluator playing the answer video and entering the script directly on the script layer L2; alternatively, the script first generated by the STT module may be displayed on the script layer L2 and finalized by the evaluator correcting its content.
  • A specific area of the script may be selected by an evaluator input such as a drag; when a specific area of the script is selected in the script layer L2, a behavioral indicator list area A1 containing one or more behavioral indicators related to the question or the specific competency to be evaluated is displayed.
  • The evaluator may then select a behavioral indicator related to the selected area of the script, and the selected area of the script may be displayed on a behavioral indicator layer L6 described later with reference to FIG. 7.
  • In addition, the evaluation interface includes a score evaluation layer L3, through which the evaluator inputs a comprehensive evaluation score for the specific competency in the corresponding answer video.
  • When the evaluator selects the evaluation score area displayed on the score evaluation layer L3, one or more preset evaluation scores are displayed.
  • For example, the preset evaluation scores may be displayed at 0.5-point intervals in a range of 1 to 5 points.
  • When the evaluator selects one of them, the corresponding evaluation score is input and displayed on the score evaluation layer L3.
  • the evaluation interface provided to the evaluator allows the evaluator to display the specific behavioral indicator when a specific behavioral indicator corresponding to the question or the specific competency is not observed in the script.
  • a deep question layer (L4) that receives a separate in-depth question for deriving; and a singularity layer (L5) for allowing the evaluator to input specific details about the answer image performed by the first evaluated person.
  • the in-depth question layer (L4) is a function for observing a specific behavioral indicator when the evaluator determines that a specific behavioral indicator is not observed among one or more behavioral indicators for a specific competency that the evaluator wants to evaluate with respect to the corresponding answer image.
  • In-depth questions to elicit answers can be input from the evaluator.
  • the in-depth question layer L4 may additionally receive the contents that the evaluator wants to ask the first evaluator in addition to the above-described in-depth question.
• the specifics layer (L5) may receive from the evaluator notes on the answer image displayed in the answer image layer (L1).
• for example, the evaluator can enter a note such as 'the truthfulness of the answer is doubtful, given signs of embarrassment such as trailing off at the end of sentences'; the entered notes may be included in the first evaluatee competency information.
• the information input by the evaluator on the evaluation interface may be included in the above-described first evaluatee competency information, which may be provided to the first evaluatee and used to train the machine learning model in the model learning step (S12).
• in particular, the in-depth question input by the evaluator on the in-depth question layer (L4) can be used as training data for the in-depth question recommendation model, which derives in-depth questions from an evaluatee's answer image. More specifically, since such a question is intended to elicit an answer concerning a behavioral indicator that was not observed in the answer image, the question together with the corresponding unobserved behavioral indicator may be used as training data for the in-depth question recommendation model.
• in another embodiment of the present invention, an expert comparison element may be displayed on the evaluation interface.
• when the evaluator makes a selection input on the expert comparison element, the evaluation performed by an expert in the evaluation method of the present invention on the answer image displayed in the answer image layer (L1) is shown; the expert's input may be displayed for each of the one or more behavioral indicators in correspondence with specific areas of the script, so that the evaluator can compare his or her own evaluation with the expert's.
• FIG. 7 schematically shows the behavioral indicator layer (L6) being displayed according to the evaluator's selection on the script layer (L2), according to an embodiment of the present invention.
• the evaluation interface may further include a behavioral indicator layer (L6), on which text corresponding to the area of the script selected by the evaluator on the script layer (L2) is displayed.
• more specifically, when the evaluator selects a specific area (B1) of the script displayed on the script layer (L2), the behavioral indicator list area (A1), containing one or more behavioral indicators corresponding to the question or the competency being evaluated, is displayed over the script layer (L2); when the evaluator then selects a specific behavioral indicator (B2) in the behavioral indicator list area (A1), the text of the selected area (B1) is displayed on the behavioral indicator layer (L6) at the position corresponding to the selected behavioral indicator (B2).
• in FIG. 7, one or more behavioral indicators corresponding to the question or the competency being evaluated are displayed in advance on the behavioral indicator layer (L6), and when the evaluator selects a specific behavioral indicator in the behavioral indicator list area (A1), the text of the selected script area is displayed at the corresponding position (at the bottom in FIG. 7).
• in this way, the evaluator can conveniently match a selected area of the script to a behavioral indicator; and because each selected behavioral indicator and its script area are displayed separately on the behavioral indicator layer (L6), the evaluator saves the time otherwise needed to structure the evaluated answers by behavioral indicator, so the evaluation proceeds more smoothly.
• FIG. 8 schematically shows the configuration of another form of evaluation interface according to an embodiment of the present invention.
• the form of the evaluation interface provided to the evaluator is not limited to that shown in FIG. 6; it may take the form shown in FIG. 8 or other forms.
• in FIG. 8, the answer image layer (L10) and the script layer (L11) are located at the top of the evaluation interface; based on the answer image displayed in the answer image layer (L10) and the script displayed in the script layer (L11), the evaluator can select the specific script content for each behavioral indicator. When the evaluator selects a specific area of the script on the script layer (L11), the behavioral indicator list area (A10) may be overlaid on the script layer (L11).
• the areas where the evaluator inputs content are arranged at the bottom of the evaluation interface, so the evaluator reviews the evaluatee's answer image at the top of the interface and enters the evaluation content in the lower area.
• the behavioral indicator layer (L12) is located at the lower left of the evaluation interface and displays the script content, per behavioral indicator, that the evaluator selected on the script layer (L11).
• next to it, an in-depth question layer (L13), a specifics layer (L14), and a score evaluation layer (L15) are arranged in sequence; after entering an in-depth question and notes, the evaluator finally inputs an evaluation score for the corresponding answer image on the score evaluation layer (L15), and the input score may be displayed in the area (A11).
• FIG. 9 schematically illustrates the process of training the machine learning model in the model learning step (S12) according to an embodiment of the present invention.
• the model learning unit 1300 performs the model learning step (S12) of training the machine learning model based on the above-described first evaluatee competency information and updating the reinforced machine learning model.
• specifically, the machine learning model can be trained by inputting the one or more behavioral indicators corresponding to the specific competency included in the first evaluatee competency information, together with the script area selected by the evaluator for each behavioral indicator.
• each machine learning model may perform evaluation for one specific competency; accordingly, the server system 1000 may include one or more machine learning models, one per competency.
• to train the machine learning model for a specific competency, either only the first evaluatee competency information for that competency (that is, the information input by evaluators for answer images performed by first evaluatees with respect to that competency) is used as training data; or, in the model learning step (S12), the first evaluatee competency information for each of a plurality of competencies is labeled by competency and the labeled information is used as training data, so that first evaluatee competency information for competencies other than the one the machine learning model evaluates may also serve as training data.
• by inputting, into the machine learning model trained in this way, the one or more behavioral indicators corresponding to the specific competency included in the first evaluatee competency information and the script area selected by the evaluator for each behavioral indicator, the competency information derivation step (S14) can derive second evaluatee competency information including discovery probability information for each behavioral indicator.
• in another embodiment, the machine learning model can additionally be trained using the answer image corresponding to the first evaluatee competency information as training data; a model trained in this way can also perform evaluation by analyzing facial expressions and emotions in the answer image.
• likewise, rather than training only on the one or more behavioral indicators corresponding to the specific competency included in the first evaluatee competency information, the machine learning model may additionally be trained using the evaluation score included in the first evaluatee competency information, or the script area selected by the evaluator for each behavioral indicator included therein, as additional training data.
• the model learning unit 1300 may also train the above-described in-depth question recommendation model; specifically, it can train that model using, as training data, the in-depth questions input by evaluators that are included in the first evaluatee competency information.
• hereinafter, the method is described in which an in-depth question is set according to the answer image performed by the second evaluatee in order to derive the second evaluatee competency information, and the evaluation result is derived by additionally considering the answer image performed by the second evaluatee for the set in-depth question.
• although the description above distinguished first evaluatee competency information from second evaluatee competency information, the following describes evaluation based on answer images within the server system; thus the evaluatee described below may correspond to the above-mentioned second evaluatee, and the comprehensive evaluation information described below may correspond to the above-described second evaluatee competency information or comprehensive evaluatee competency information.
• FIG. 10 schematically shows the detailed configuration of the competency information derivation unit 1500 according to an embodiment of the present invention.
• the steps of setting an in-depth question based on the answer image performed by the evaluatee, and of deriving comprehensive evaluation information from the evaluatee's answer image for that in-depth question, are performed by the competency information derivation unit 1500.
• the competency information derivation unit 1500 includes a general questioning unit 1510, which first provides the evaluatee with one or more questions about the specific competency in order to evaluate it.
• the evaluatee requests evaluation of a specific competency through the evaluatee terminal, for example an interview evaluation for a company the evaluatee wishes to apply to, or for a particular job at such a company.
• the one or more questions may correspond to questions designed so that, with respect to the one or more behavioral indicators related to the specific competency, those behavioral indicators can be observed in the evaluatee's answer.
• the evaluatee may receive, through the evaluatee terminal, the one or more questions provided by the first question providing unit 1511, and may generate an answer image for those questions through the evaluatee terminal; the evaluatee terminal then transmits the generated answer image to the server system 1000.
• the first output information derivation unit 1512 derives first output information by inputting the answer image received by the server system 1000 into the above-described machine learning model. More specifically, the first output information may include evaluation information for the specific competency based on the evaluatee's answer image, together with the derived behavioral indicators related to that evaluation information.
• the competency information derivation unit 1500 may further include an in-depth question setting unit 1520, which derives the in-depth questions to be provided to the evaluatee based on the first output information derived by the first output information derivation unit 1512.
• specifically, the in-depth question setting unit 1520 can derive in-depth questions capable of eliciting from the evaluatee answers related to those behavioral indicators, among the plurality of behavioral indicators corresponding to the specific competency, that do not correspond to the derived behavioral indicators included in the first output information.
• the in-depth question setting unit 1520 may derive such questions from the one or more questions, preset in the server system 1000, that are related to the plurality of behavioral indicators for the specific competency.
• the competency information derivation unit 1500 may further include a competency evaluation unit 1530, which provides the in-depth questions derived by the in-depth question setting unit 1520 to the evaluatee and finally performs the evaluation of the specific competency based on the evaluatee's video answers to those in-depth questions.
• the competency evaluation unit 1530 includes an in-depth questioning unit 1540 and a comprehensive evaluation information derivation unit 1550; the in-depth questioning unit 1540 includes a second question providing unit 1541, which provides the one or more in-depth questions derived by the in-depth question setting unit 1520 to the evaluatee, and a second output information derivation unit 1542, which derives second output information based on the evaluatee's answer image for the one or more in-depth questions provided by the second question providing unit 1541.
• in an embodiment, the first question providing unit 1511 and the second question providing unit 1541 are included in the question providing unit 1400 of the above-described server system 1000, so that the questions and in-depth questions about the specific competency may be provided to the evaluatee through the question providing unit 1400.
• the evaluatee may receive, through the evaluatee terminal, the one or more in-depth questions provided by the second question providing unit 1541, and may generate an answer image for them through the evaluatee terminal; the evaluatee terminal then transmits the answer image for the one or more in-depth questions to the server system 1000.
• the second output information derivation unit 1542 derives second output information by inputting into the machine learning model the answer image, received by the server system 1000, that the evaluatee performed for the one or more in-depth questions.
• the second output information may include evaluation information for the specific competency based on the evaluatee's answer image for the one or more in-depth questions, together with the derived behavioral indicators related to that evaluation information.
• the comprehensive evaluation information derivation unit 1550 derives comprehensive evaluation information based on the first output information derived by the first output information derivation unit 1512 and the second output information derived by the second output information derivation unit 1542; the comprehensive evaluation information may correspond to the above-described second evaluatee competency information or comprehensive evaluatee competency information.
• whereas the competency information derivation unit 1500 in the 'method of providing automated evaluation of interview images using a machine learning model' derives the evaluatee's competency information from the answer image alone, in this configuration in-depth questions are derived from the answer image the evaluatee performs first, and the evaluation additionally considers the evaluatee's answer image for those in-depth questions, so a more reliable behavioral-interview-style evaluation can be performed.
• FIG. 11 schematically illustrates the method, performed by the server system 1000, of deriving in-depth questions for automated evaluation of an interview image according to an embodiment of the present invention.
• a plurality of behavioral indicators and a plurality of questions are preset for the specific competency in the server system 1000, and each of the plurality of behavioral indicators has a correlation with at least one of the plurality of questions.
• the automated evaluation method includes a general question step comprising a first question providing step (S20) of providing the evaluatee with one or more of the preset questions for performing the evaluation of the specific competency, and a first output information deriving step (S21) of inputting the answer image performed by the evaluatee for the one or more provided questions into the machine learning model to derive first output information including evaluation information for the evaluatee's specific competency and the derived behavioral indicators related to that evaluation information;
• an in-depth question setting step (S22) of setting one or more in-depth questions based on the one or more derived behavioral indicators; and a competency evaluation step of performing the evaluation of the specific competency based on the evaluatee's answer image for the in-depth questions and the first output information derived in the first output information deriving step (S21).
• specifically, in the first question providing step (S20), the server system 1000 provides the evaluatee with one or more questions for performing the evaluation of the specific competency requested by the evaluatee.
• as described above, for each competency a plurality of behavioral indicators and a plurality of questions related to that competency are preset in the server system 1000, and each behavioral indicator has a correlation with at least one of the questions.
• one or more questions about the competency to be evaluated may be provided to the corresponding evaluatee terminal, so that the evaluatee can generate an answer image for those questions.
• the generated answer image may be transmitted to the server system 1000 and stored in the DB 1600.
• in the first output information deriving step (S21), the first output information is derived by inputting the evaluatee's answer image into the machine learning model.
• the first output information includes evaluation information for the specific competency being evaluated and the derived behavioral indicators related to that evaluation information.
• the evaluation information may include discovery probability information for each behavioral indicator related to the specific competency in the corresponding answer image, and text information about the specific content of the answer image related to each behavioral indicator.
• the derived behavioral indicators are the behavioral indicators observed in the content of the answer image among the plurality of behavioral indicators related to the specific competency; preferably, a behavioral indicator whose discovery probability information exceeds a predetermined value is derived as a derived behavioral indicator, as in the sketch below.
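• A minimal illustrative sketch of selecting derived behavioral indicators from the per-indicator discovery probabilities; the threshold value and field names are assumptions, not fixed by the specification.

```python
# Illustrative: select derived behavioral indicators whose discovery
# probability exceeds the predetermined value. Threshold is an assumption.
DISCOVERY_THRESHOLD = 0.5

def derive_indicators(discovery_probs: dict[str, float],
                      threshold: float = DISCOVERY_THRESHOLD) -> list[str]:
    """Return the behavioral indicators whose discovery probability
    exceeds the predetermined value."""
    return [name for name, p in discovery_probs.items() if p > threshold]

probs = {"indicator_1": 0.91, "indicator_2": 0.12,
         "indicator_3": 0.64, "indicator_4": 0.77, "indicator_5": 0.08}
print(derive_indicators(probs))  # ['indicator_1', 'indicator_3', 'indicator_4']
```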
• the general question step comprising the first question providing step (S20) and the first output information deriving step (S21) may be performed repeatedly. For example, when there are multiple questions about the specific competency, the general question step may be repeated once per question, deriving first output information for each.
• alternatively, the first question providing step (S20) may provide several questions to the evaluatee at once, and the first output information deriving step (S21) may then be performed multiple times to derive first output information for the answer image to each question.
• the in-depth question setting step (S22) derives one or more in-depth questions based on the one or more derived behavioral indicators included in the one or more pieces of first output information derived in step (S21). More specifically, step (S22) sets one or more in-depth questions for eliciting from the evaluatee answers on the behavioral indicators, among the plurality of behavioral indicators corresponding to the specific competency, that are not included in the derived behavioral indicators.
• for example, among the questions preset in the server system 1000, a question related to a behavioral indicator not included in the derived behavioral indicators may be derived as an in-depth question; alternatively, in-depth questions may be derived by a rule-based method or a machine learning model, as in the sketch below.
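• A sketch of the rule-based variant of the in-depth question setting step (S22): for every behavioral indicator missing from the derived set, draw a question from a per-indicator question pool. The pool contents here are invented for illustration.

```python
# Rule-based in-depth question selection (S22). The pool entries are
# invented examples; the server system would hold the real question pool.
import random

QUESTION_POOL = {
    "indicator_2": ["Tell me about a time you resolved a conflict in a team."],
    "indicator_5": ["Describe a situation where you had to persuade others."],
}

def set_in_depth_questions(all_indicators: set[str],
                           derived: set[str]) -> dict[str, str]:
    """For each indicator missing from the derived set, draw one
    in-depth question from its question pool."""
    missing = all_indicators - derived
    return {ind: random.choice(QUESTION_POOL[ind])
            for ind in missing if ind in QUESTION_POOL}

questions = set_in_depth_questions(
    {"indicator_1", "indicator_2", "indicator_3", "indicator_4", "indicator_5"},
    {"indicator_1", "indicator_3", "indicator_4"},
)  # -> questions for indicator_2 and indicator_5
```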
• the method further includes an in-depth question step comprising a second question providing step (S23) of providing the evaluatee with one or more of the in-depth questions set in step (S22), and a second output information deriving step (S24) of inputting the evaluatee's answer image for those in-depth questions into the machine learning model to derive second output information including evaluation information for the evaluatee's specific competency and the derived behavioral indicators related to that evaluation information; and a comprehensive evaluation information deriving step (S25) of deriving comprehensive evaluation information for the evaluatee's specific competency based on the first output information and the second output information.
• the one or more in-depth questions derived in step (S22) are transmitted to the corresponding evaluatee terminal and provided to the evaluatee in the second question providing step (S23), and the evaluatee can generate an answer image for them through the evaluatee terminal.
• the evaluatee terminal transmits the generated answer image for the one or more in-depth questions to the server system 1000, which may receive it and store it in the DB 1600.
• in the second output information deriving step (S24), the second output information is derived by inputting the evaluatee's answer image for the one or more in-depth questions into the machine learning model.
• the machine learning model used in step (S24) may be the same as that used in the first output information deriving step (S21) described above.
• the second output information has the same structure as the first output information derived in step (S21), but is derived from the answer image for the in-depth questions.
• the comprehensive evaluation information deriving step (S25) derives comprehensive evaluation information on the evaluatee's specific competency based on the first output information and the second output information. More specifically, the first output information is derived from the evaluatee's answer image for the one or more questions about the specific competency provided in step (S20), and includes information on the derived behavioral indicators for that competency observable in the corresponding answer image.
• the second output information is derived from the evaluatee's answer images for the one or more in-depth questions, which were set to elicit answers on the behavioral indicators, among the plurality of behavioral indicators for the specific competency, that did not correspond to the derived behavioral indicators of the first output information.
• accordingly, the derived behavioral indicators included in the first output information and the second output information may together cover all of the plurality of behavioral indicators for the specific competency, and as a result the evaluation of the specific competency can be performed based on both.
• if some behavioral indicators still do not correspond to the derived behavioral indicators, the in-depth question setting step (S22) may be performed again to derive additional in-depth questions about them, and the second question providing step (S23) and the second output information deriving step (S24) may likewise be repeated to derive output information for the evaluatee's answer images on those indicators; this iterative process can be repeated until the derived behavioral indicators included in the output information cover all of the plurality of behavioral indicators for the specific competency.
• conversely, when no in-depth questions are needed, the comprehensive evaluation information deriving step (S25) can be performed immediately, without the steps related to the in-depth question; in this case the comprehensive evaluation information may be derived based on the first output information alone. The overall flow is sketched below.
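• A sketch of the iterative flow just described, repeating the in-depth question cycle (S22 → S23 → S24) until the derived behavioral indicators cover all indicators, and skipping straight to S25 when the general question step already covered everything. `ask_evaluatee`, `run_model`, and the `max_rounds` safety cap are assumptions standing in for the question providing and output derivation steps.

```python
# Iterative evaluation flow (S20/S21 -> [S22 -> S23 -> S24]* -> S25).
# `ask_evaluatee` and `run_model` are injected placeholders.
def evaluate_competency(all_indicators: set, first_answer, ask_evaluatee,
                        run_model, max_rounds: int = 3):
    outputs = [run_model(first_answer)]            # S21: first output information
    covered = set(outputs[0]["derived_indicators"])
    rounds = 0
    while covered < all_indicators and rounds < max_rounds:
        missing = all_indicators - covered         # S22: set in-depth questions
        answer = ask_evaluatee(missing)            # S23: provide them, get answer
        output = run_model(answer)                 # S24: second output information
        outputs.append(output)
        covered |= set(output["derived_indicators"])
        rounds += 1
    return synthesize(outputs)                     # S25: comprehensive evaluation

def synthesize(outputs):
    """Placeholder for the comprehensive evaluation information derivation."""
    return outputs
```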
• in other words, the in-depth question setting step (S22), based on the plurality of behavioral indicators set for the specific competency and the one or more derived behavioral indicators derived through the general question step, can determine the behavioral indicators among them that were not derived, and set one or more in-depth questions for eliciting from the evaluatee answers related to those non-derived behavioral indicators.
• the in-depth question setting step (S22) includes a step (S30) of determining the behavioral indicators, among the plurality of behavioral indicators corresponding to the specific competency, that are not included in the one or more derived behavioral indicators of the first output information.
• by determining in step (S30) only the behavioral indicators not included in the first output information, in-depth questions can be derived in step (S31), described below, that can elicit from the evaluatee answers related to those behavioral indicators.
• the in-depth question setting step (S22) further includes a step (S31) of setting one or more in-depth questions for the behavioral indicators not included in the derived behavioral indicators.
• in step (S31), one or more in-depth questions related to those behavioral indicators are derived so that the behavioral indicators not derived as derived behavioral indicators can be observed in the evaluatee's answer.
• the one or more in-depth questions may be the one or more questions corresponding, among the questions preset per behavioral indicator in the server system 1000, to a behavioral indicator not derived as a derived behavioral indicator, or a specific question among them may be derived as the in-depth question.
• thus, the in-depth question setting step (S22) performed by the in-depth question setting unit 1520 as shown in FIG. 12 corresponds to a method of performing predetermined steps to derive a specific question as an in-depth question from the question pool, stored per behavioral indicator in the server system 1000, for a behavioral indicator that did not correspond to the derived behavioral indicators; in another embodiment of the present invention, an in-depth question can instead be derived using a machine-learned in-depth question recommendation model.
• FIG. 13 schematically shows the detailed steps of the in-depth question setting step implemented in another way according to an embodiment of the present invention.
• in this variant, the in-depth question setting step, based on the plurality of behavioral indicators set for the specific competency and the one or more derived behavioral indicators derived through the general question step, determines as an incomplete behavioral indicator any behavioral indicator that was derived as a derived behavioral indicator but does not meet a preset discrimination criterion, and sets one or more in-depth questions allowing the evaluatee to give answers related to that incomplete behavioral indicator.
• as before, one or more in-depth questions can also be set to determine the behavioral indicators not derived as derived behavioral indicators and to elicit answers on them from the evaluatee.
• that is, as shown in FIG. 13, the in-depth question setting step (S22) may set one or more in-depth questions for determining the behavioral indicators, among the plurality set for the specific competency, that were derived as derived behavioral indicators but only incompletely, and for eliciting answers on those incomplete behavioral indicators from the evaluatee.
• more specifically, in the in-depth question setting step (S22), a behavioral indicator that is included in the one or more derived behavioral indicators of the first output information but does not meet the preset discrimination criterion is determined (S40).
• the preset discrimination criterion may correspond to a reference value for judging to what degree a behavioral indicator counts as derived: when the criterion is met, the specific behavioral indicator is regarded as completely derived; otherwise it is determined to be an incompletely derived behavioral indicator, that is, an incomplete behavioral indicator, as in the sketch below.
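• A sketch of the discrimination in step (S40), assuming both the derivation threshold and the stricter discrimination criterion are scalar cut-offs on the discovery probability; the specification leaves the criterion abstract, so the values are illustrative.

```python
# Step S40 as a two-threshold classification over discovery probability.
# Both cut-off values are assumptions for illustration.
DERIVE_CUTOFF = 0.5        # above this, the indicator counts as derived
COMPLETE_CUTOFF = 0.8      # discrimination criterion for "completely" derived

def classify(p: float) -> str:
    if p <= DERIVE_CUTOFF:
        return "not_derived"          # handled by the FIG. 12 method
    if p < COMPLETE_CUTOFF:
        return "incomplete"           # FIG. 13: ask a further in-depth question
    return "complete"

print(classify(0.64))  # 'incomplete' -> target of step S41
```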
• by discriminating in step (S40), as incomplete behavioral indicators, the behavioral indicators that are included in the first output information but are not clearly observed, in-depth questions can be derived in step (S41), described below, that can elicit from the evaluatee answers related to those incomplete behavioral indicators.
• in step (S41), one or more in-depth questions related to each incomplete behavioral indicator are derived so that the incomplete behavioral indicator can be observed in the evaluatee's answer.
• the one or more in-depth questions may be the one or more questions corresponding to the incomplete behavioral indicators among the questions preset per behavioral indicator in the server system 1000, or a specific question among them may be derived as the in-depth question.
• thus, the in-depth question setting step (S22) performed by the in-depth question setting unit 1520 as shown in FIG. 13 corresponds to a method of performing predetermined steps to derive in-depth questions for incomplete behavioral indicators from the question pool stored per behavioral indicator in the server system 1000.
• the in-depth question setting step (S22) may use only one of the two methods: setting in-depth questions for behavioral indicators not derived as derived behavioral indicators, as described with FIG. 12, or setting in-depth questions for incomplete behavioral indicators, as described with FIG. 13.
• alternatively, both methods may be used, setting one or more in-depth questions for each behavioral indicator that was not derived as a derived behavioral indicator and for each incomplete behavioral indicator.
• FIG. 14 schematically illustrates the process by which the machine learning model derives output information in the competency information derivation unit 1500 according to an embodiment of the present invention.
• the competency information derivation unit 1500 derives output information by inputting the evaluatee's answer image into the machine learning model. Specifically, the first output information derivation unit 1512 included in the competency information derivation unit 1500 may derive the first output information by inputting into the machine learning model the answer image performed by the evaluatee for the one or more questions provided by the first question providing unit 1511, and the second output information derivation unit 1542 may derive the second output information by inputting the answer image performed by the evaluatee for the one or more in-depth questions provided by the second question providing unit 1541.
• the machine learning model may include various detailed machine learning models that perform evaluation on the evaluatee's answer image.
• a detailed machine learning model may be one that is trained and evaluates based on deep learning, or one that derives feature information about the answer image according to a preset routine or algorithm rather than by learning and then evaluates that feature information.
• the competency information derivation unit 1500 basically receives the evaluatee's answer image, consisting of a plurality of consecutive image frames and voice information, and derives the output information through a machine learning model trained with machine learning techniques such as deep learning.
• additionally, the competency information derivation unit 1500 may analyze the answer image based on preset rules rather than machine learning and derive specific evaluation values.
• the competency information derivation unit 1500 may extract the video information and audio information from the answer image and input each into a separate detailed machine learning model to derive result values, or may input the combined video and audio information into a detailed machine learning model to derive a result value.
• the competency information derivation unit 1500 may itself include the machine learning model and derive output information based on the feature information derived from the answer image, or it may derive the output information by calling a separately provided machine learning model.
• in another embodiment, the first output information derived in the first output information deriving step (S21) may be input into the machine-learning-based in-depth question recommendation model to derive one or more in-depth questions for eliciting from the evaluatee answers related to the behavioral indicators not derived as derived behavioral indicators.
• that is, the in-depth question setting unit 1520 may derive in-depth questions by performing the predetermined steps described with FIG. 12, or by inputting the first output information into the in-depth question recommendation model as shown in FIG. 15.
• the in-depth question recommendation model is trained on the in-depth question information included in the above-described first evaluatee competency information; more specifically, this information corresponds to the in-depth questions input by evaluators on the in-depth question layer of the evaluation interface.
• the in-depth question recommendation model may be trained on the in-depth question information alone, but preferably it additionally learns the behavioral indicators related to each in-depth question input on the evaluation interface, so that it learns the relation between unobserved behavioral indicators and the corresponding in-depth questions.
• the in-depth question recommendation model may include various detailed machine learning models for deriving in-depth questions from the evaluatee's answer image: a detailed model trained by deep learning to derive in-depth questions, or a detailed model in which feature information is derived according to a preset routine or algorithm and the in-depth question is derived from that feature information.
• the in-depth question setting unit 1520 may itself include the in-depth question recommendation model and derive one or more in-depth questions based on the first output information, or may derive them by calling a separately provided in-depth question recommendation model.
• although the in-depth question recommendation model in FIG. 15 is shown as a model separate from the machine learning model of FIG. 14, in another embodiment it may be included in the machine learning model and may derive one or more in-depth questions by receiving the first output information derived by the detailed machine learning models included therein. A sketch of one possible realization follows.
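• One plausible, invented realization of the in-depth question recommendation model: a nearest-neighbour text model that maps the unobserved behavioral indicator plus the answer script onto the nearest evaluator-authored in-depth question collected in the first evaluatee competency information. All training pairs shown are illustrative; the specification does not fix the model architecture.

```python
# Invented sketch: retrieve the evaluator-authored in-depth question whose
# training context is closest to (unobserved indicator + answer script).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

# Training pairs: (unobserved indicator + script excerpt) -> evaluator question.
contexts = ["indicator_2 we never really disagreed as a team",
            "indicator_5 I mostly worked on my own part"]
questions = ["How did you handle the sharpest disagreement you faced?",
             "Tell me about persuading a reluctant colleague."]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(contexts)
index = NearestNeighbors(n_neighbors=1).fit(X)

def recommend(indicator: str, script: str) -> str:
    """Return the stored in-depth question nearest to the given context."""
    q = vectorizer.transform([f"{indicator} {script}"])
    _, idx = index.kneighbors(q)
    return questions[idx[0][0]]
```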
• FIG. 16 schematically shows a configuration for deriving output information from the machine learning model by inputting the evaluatee's answer image, according to an embodiment of the present invention.
• the first output information deriving step (S21) and the second output information deriving step (S24), performed by the competency information derivation unit 1500, may receive the evaluatee's answer image as input, perform predetermined steps to process it, and input the processed answer image into the machine learning model to derive the output information.
• (A), (B), and (C) of FIG. 16 correspond to various embodiments of the input elements that the competency information derivation unit 1500 feeds into the machine learning model.
  • the first output information deriving step (S21) and the second output information deriving step (S24) are performed in the answer image performed by the evaluator.
  • Image information and audio information may be separated, and each of the separated image information and audio information may be pre-processed and input to the machine learning model.
  • the first output information derivation step (S21) is to receive an image of an answer performed by the assessee to one or more questions provided to the assessee in the first question and question study 1511, and obtain image information and audio information from the answer image.
  • the second output information derivation step (S24) the answer image performed by the assessee to one or more in-depth questions provided to the assessee in the second question study 1541 is input, and image information and Separate voice information.
• for this purpose, the competency information derivation unit 1500 includes a video-and-audio separation module, which separates the answer images received in steps (S21) and (S24) into video information and audio information.
• the competency information derivation unit 1500 further includes a preprocessing module, which preprocesses each of the image information and the audio information.
• through the preprocessing module, the image information and the audio information are converted into a form suitable for the algorithm of the machine learning model, which can improve the model's performance.
• for example, the preprocessing module may handle missing values or features in a data cleaning step, encode categorical attributes into numeric data by one-hot encoding in a handling-text-and-categorical-attributes step, transform the data through custom transformers, set the range of the data in a feature scaling step, and automate the whole sequence through transformation pipelines.
• the steps performed in the preprocessing module are not limited to those described above and may include various other preprocessing steps for the machine learning model; a sketch of such a pipeline follows.
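• A sketch of the named preprocessing steps composed with scikit-learn's pipeline facilities. The feature names are invented; the specification only fixes the kinds of steps (cleaning, encoding, scaling, automated pipelines).

```python
# Preprocessing pipeline sketch: cleaning, one-hot encoding, scaling,
# automated as a transformation pipeline. Feature names are illustrative.
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

numeric_features = ["pitch_mean", "speech_rate"]   # e.g. from audio information
categorical_features = ["detected_emotion"]        # e.g. from image information

preprocess = ColumnTransformer([
    ("num", Pipeline([
        ("clean", SimpleImputer(strategy="median")),     # data cleaning step
        ("scale", StandardScaler()),                     # feature scaling step
    ]), numeric_features),
    ("cat", Pipeline([
        ("clean", SimpleImputer(strategy="most_frequent")),
        ("onehot", OneHotEncoder(handle_unknown="ignore")),  # one-hot encoding
    ]), categorical_features),
])
# `preprocess.fit_transform(frame)` on a DataFrame with these columns
# automates the whole sequence, matching the "transformation pipelines" step.
```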
• the competency information derivation unit 1500 further includes an STT (Speech to Text) module, which performs speech-to-text conversion on the answer images received in the first output information deriving step (S21) and the second output information deriving step (S24).
• the speech-to-text conversion performed by the STT module may use any of various existing STT conversion methods.
• text information need not be derived only by the STT module: the text for an answer image may be input directly by a manager of the server system 1000, or the STT module may first derive text information for the answer image and the manager of the server system 1000 or another party may correct it to produce the final text information.
• the STT module may receive the audio information separated from the answer image by the video-and-audio separation module, perform STT conversion, and thereby convert that audio information into text information.
• the first output information deriving step (S21) and the second output information deriving step (S24) include performing embedding, which expresses the derived text information as a vector.
• to this end, the competency information derivation unit 1500 may further include an embedding module, which performs embedding on the text information derived from the answer image.
• the competency information derivation unit 1500 may also perform embedding on the text of the question associated with the evaluatee's answer image, and the embedded vector for that question may be an additional input to the machine learning model. The machine learning model can therefore derive more sophisticated output information by considering not only the answer image but also the question it answers.
• the embedding module may express each piece of text information in vector form using various embedding methods such as one-hot encoding, CountVectorizer, TfidfVectorizer, and Word2Vec.
• the embedded vectors are input to the machine learning model together with the preprocessed image information and audio information described above, and the model derives the output information for the evaluatee's answer image; a sketch follows.
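• A minimal sketch of the embedding module using one of the methods named above (TF-IDF); the example texts are invented, and CountVectorizer or Word2Vec could be substituted.

```python
# Embedding sketch: express the answer script and the question as vectors
# before feeding them to the machine learning model. Texts are invented.
from sklearn.feature_extraction.text import TfidfVectorizer

answer_script = "I led the project and coordinated the schedule with the team."
question_text = "Describe a situation where you showed leadership."

vectorizer = TfidfVectorizer()
vectors = vectorizer.fit_transform([answer_script, question_text])
answer_vec, question_vec = vectors[0], vectors[1]
# Both vectors, alongside the preprocessed video and audio features,
# become inputs to the machine learning model.
```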
• in the embodiment of FIG. 16(C), the competency information derivation unit 1500 derives the output information for an answer image by inputting into the machine learning model the preprocessed image information, the preprocessed audio information, the text information derived from the evaluatee's answer image, and a competency identifier for that answer image.
• the machine learning model of FIG. 16(C) may correspond to a machine-learning-based model capable of evaluating a plurality of competencies rather than a single specific competency.
• by inputting the competency identifier into the machine learning model, the specific competency corresponding to that identifier can be evaluated: the model can evaluate each of a plurality of competencies, and output information is derived by inputting the evaluatee's answer image together with a competency identifier that identifies the specific competency to be evaluated through that answer image.
• output information may likewise be derived by inputting into the machine learning model the embedded vector of the text information derived from the answer image, the embedded vector of the question text for that answer image, and the competency identifier corresponding to that answer image; one possible arrangement is sketched below.
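• A sketch of the multi-competency variant of FIG. 16(C), assuming the competency identifier is realized as a one-hot vector concatenated with the other inputs; the competency names and feature dimensions are invented.

```python
# Multi-competency input assembly sketch: a one-hot competency identifier
# selects which competency the shared model should score. Dimensions invented.
import numpy as np

COMPETENCIES = ["communication", "leadership", "problem_solving"]

def build_model_input(video_feat: np.ndarray, audio_feat: np.ndarray,
                      text_vec: np.ndarray, competency: str) -> np.ndarray:
    """Concatenate preprocessed modalities with a one-hot competency ID."""
    one_hot = np.zeros(len(COMPETENCIES))
    one_hot[COMPETENCIES.index(competency)] = 1.0
    return np.concatenate([video_feat, audio_feat, text_vec, one_hot])

x = build_model_input(np.random.rand(128), np.random.rand(64),
                      np.random.rand(300), "leadership")
# `x` would then be fed to the shared machine learning model, which evaluates
# the specific competency selected by the identifier.
```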
• FIG. 17 schematically shows a configuration for setting in-depth questions according to the output information derived by inputting the evaluatee's answer image into the machine learning model, and for deriving comprehensive evaluation information, according to an embodiment of the present invention.
• the first question providing unit 1511 provides the evaluatee with one or more questions related to the specific competency; accordingly, the first output information derivation unit 1512 may process the evaluatee's answer image for those questions and input the processed answer image into the machine learning model to derive the first output information.
• the one or more questions provided by the first question providing unit 1511 may consist of independent, individual questions with no relation between them, but preferably a correlation exists among them.
• for example, the first question may ask about the 'situation' of the evaluatee's past experience related to the specific competency; the second question, in connection with the first, may ask what 'action' the evaluatee took in that situation; and the third question, in connection with the first and second, may ask about the 'result' of that action, so that the questions are interconnected.
• although in FIG. 17 the first question providing unit 1511 provides each question individually and the first output information derivation unit 1512 derives first output information for each answer image the evaluatee performs per question, in another embodiment of the present invention the first question providing unit 1511 provides the one or more questions to the evaluatee at once, and the first output information derivation unit 1512 may derive the first output information either by splitting the evaluatee's answer image per question and inputting each part into the machine learning model, or by inputting the entire answer image into the machine learning model.
• the first output information derivation unit 1512 derives first output information that includes, as derived behavioral indicators, the behavioral indicators for the specific competency observed in the answer image, and the in-depth question setting unit 1520 performs the in-depth question setting step (S22) to derive one or more in-depth questions based on that first output information.
• in the example of FIG. 17, the behavioral indicators related to the specific competency are behavioral indicators 1 to 5; the derived behavioral indicators in the first output information for the evaluatee's answer to the first question include behavioral indicator 1, those for the second question include behavioral indicators 3 and 4, and those for the third question include behavioral indicator 4.
• the in-depth question setting unit 1520 derives in-depth questions related to the behavioral indicators that do not correspond to the derived behavioral indicators included in each piece of first output information.
• in this example, none of the first output information includes derived behavioral indicators corresponding to behavioral indicators 2 and 5, so the in-depth question setting unit 1520 derives in-depth questions for eliciting from the evaluatee answers related to behavioral indicators 2 and 5.
• the in-depth question setting unit 1520 may derive, as in-depth questions, one or more of the preset questions related to those non-derived behavioral indicators, or may derive separate in-depth questions through the machine-learned in-depth question recommendation model.
• the second question providing unit 1541 provides the one or more in-depth questions to the evaluatee; accordingly, the second output information derivation unit 1542, as shown in FIG. 16, may process the evaluatee's answer image for those in-depth questions and input the processed answer image into the machine learning model to derive the second output information.
• the second output information derivation unit 1542 may derive second output information that includes, as derived behavioral indicators, the behavioral indicators for the specific competency observed in the answer image.
• the first output information derived in step (S21) and the second output information derived in step (S24) may further include discovery probability information for the derived behavioral indicators related to the evaluation information, and the text information of the evaluatee's answer image corresponding to that discovery probability information.
• in other words, the first output information and the second output information derived through the machine learning model may include discovery probability information indicating whether the answer image contains relevant answer content for each of the one or more derived behavioral indicators, and this discovery probability information may also be included in the comprehensive evaluation information derived in the comprehensive evaluation information deriving step (S25).
• just as the evaluator selects a specific area of the script on the script layer and a specific behavioral indicator in the behavioral indicator list area, thereby marking the answer content of the first evaluatee's answer image that corresponds to that behavioral indicator, so the machine learning model can calculate probabilistically whether answer content corresponding to each behavioral indicator is present.
• in addition, the first output information derivation unit 1512 and the second output information derivation unit 1542 may further include in the first and second output information, respectively, the specific text information, among the text information derived from the evaluatee's answer image in the competency information derivation unit 1500, that corresponds to the discovery probability information calculated by the machine learning model for each derived behavioral indicator.
• preferably, the first output information derivation unit 1512 and the second output information derivation unit 1542 further include, in the first and second output information respectively, the specific text information corresponding to each derived behavioral indicator whose discovery probability information, as calculated by the machine learning model, exceeds a predetermined value.
• this specific text information corresponding to the discovery probability information for each derived behavioral indicator may be derived through the machine learning model.
• the comprehensive evaluation information derived in the comprehensive evaluation information deriving step (S25) may include a score for the specific competency calculated by synthesizing the discovery probability information for each of the derived behavioral indicators derived in the first output information deriving step (S21) and the second output information deriving step (S24).
• that is, the comprehensive evaluation information deriving step (S25), performed by the comprehensive evaluation information derivation unit 1550, derives comprehensive evaluation information that finally evaluates the evaluatee's specific competency based on the first output information and the second output information.
• the comprehensive evaluation information derivation unit 1550 calculates a score for the evaluatee's answer images, analogous to the evaluation score input by the evaluator on the score evaluation layer (L3) of the above-described evaluation interface, and the score may be included in the comprehensive evaluation information.
• like that evaluation score, the score calculated by the comprehensive evaluation information derivation unit 1550 may be one of a plurality of scores set at specific intervals within a preset range.
• since the comprehensive evaluation information includes a graded score for the specific competency, the degree to which the evaluatee possesses that competency can be quantified and provided; one possible scoring scheme is sketched below.
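• A sketch of one way the comprehensive score might be computed: average the per-indicator discovery probabilities over the first and second output information, then snap to the preset 1–5 scale in 0.5-point steps. The mapping itself is an assumption; the specification only requires a synthesized, graded score.

```python
# Assumed scoring scheme: mean discovery probability mapped onto the
# 1-5 scale in 0.5-point intervals described for the evaluation interface.
def comprehensive_score(indicator_probs: dict[str, float]) -> float:
    mean_p = sum(indicator_probs.values()) / len(indicator_probs)
    raw = 1.0 + 4.0 * mean_p                       # map [0, 1] onto [1, 5]
    return min(5.0, max(1.0, round(raw * 2) / 2))  # 0.5-point intervals

print(comprehensive_score(
    {"indicator_1": 0.91, "indicator_2": 0.55, "indicator_3": 0.64,
     "indicator_4": 0.77, "indicator_5": 0.40}))   # -> 3.5
```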
• the comprehensive evaluation information derived in step (S25) may include a score for the specific competency calculated by synthesizing the preprocessing result information for the answer images input in the first output information deriving step (S21) and the second output information deriving step (S24).
• alternatively, the comprehensive evaluation information derived in step (S25) may include a score for the evaluatee's specific competency derived by inputting, into a separate machine learning model, the one or more answer images input to the machine learning model in step (S21) and the one or more answer images for the in-depth questions input to the machine learning model in step (S24).
• the answer images input to the separate machine learning model may be preprocessed through a predetermined preprocessing step before being input.
• the machine learning model for deriving the first and second output information and the separate machine learning model for deriving the comprehensive evaluation information may also be included in a single machine learning model; in that case each answer image is input to the single model to derive the first and second output information, and the comprehensive evaluation information may be derived based on each answer image or on the first and second output information derived by the model.
  • FIG. 18 schematically illustrates a configuration for deriving comprehensive evaluation information by further using feature information derived from a machine learning model to which the answer videos performed by the evaluated person are input, according to an embodiment of the present invention.
  • The comprehensive evaluation information derived in the comprehensive evaluation information deriving step (S25) may include a score for the specific competency derived based on one or more of: the discovery probability information and text information for the derived behavioral indicators included in the first output information and the second output information; the basic score information for the corresponding answer video; and the feature information generated by the machine learning model in the course of deriving the first output information and the second output information.
  • FIG. 18 illustrates this process of deriving comprehensive evaluation information based on the answer videos performed by the evaluated person.
  • An answer video for each of the plurality of questions provided to the evaluated person is input to the machine learning model, and the machine learning model derives output information corresponding to the competency deduction result for each answer video.
  • The output information may include the discovery probability information for the derived behavioral indicators corresponding to the answer video, the text information for those indicators, and basic score information.
  • Unlike the score for a specific competency included in the comprehensive evaluation information, the basic score information corresponds to a score for the single answer video.
  • The process of deriving the output information may be performed in the above-described first output information deriving step (S21) and second output information deriving step (S24).
  • As described with reference to FIG. 17, the plurality of questions provided to the evaluated person include the in-depth questions derived by the server system.
  • To derive the output information corresponding to the competency deduction result, the machine learning model that received the answer video first derives feature information about it, and then derives the output information based on that feature information. The derivation of feature information by the machine learning model is described later with reference to FIG. 19.
  • The comprehensive evaluation information can be derived based on the one or more pieces of output information and on the feature information derived by the machine learning model for each answer video, and it may include the scores for the specific competencies to be assessed.
  • Specifically, in the comprehensive evaluation information deriving step (S25), the output information derived in the first output information deriving step (S21) and the second output information deriving step (S24), together with the feature information derived by the machine learning model used in those steps, are input to a separate machine learning model, and the separate machine learning model derives comprehensive evaluation information including a score for the specific competency of the evaluated person.
  • The separate machine learning model used in the comprehensive evaluation information deriving step (S25) may be distinct from the machine learning model used in the above-described first output information deriving step (S21) and second output information deriving step (S24); alternatively, steps S21, S24, and S25 may all be performed by a single overall machine learning model that derives both the output information and the comprehensive evaluation information.
  • The separate machine learning model may derive the comprehensive evaluation information by performing machine learning of a deep learning type, or of an ensemble learning type, as sketched below.
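  • The embodiment does not fix a particular ensemble method; one plausible reading, sketched below with an illustrative feature layout and synthetic data (the model choice, dimensions, and target are assumptions), is to flatten the per-answer output information into a feature vector and fit a standard ensemble regressor against evaluator-assigned scores:

    # Python sketch: gradient-boosted ensemble over per-answer features.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)
    X = rng.random((200, 12))        # e.g. discovery probabilities + basic scores
    y = X[:, :6].mean(axis=1) * 100  # stand-in for evaluator competency scores

    model = GradientBoostingRegressor().fit(X, y)
    print(model.predict(X[:1]))      # comprehensive-score estimate for one candidate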
  • Either the output information and the feature information as a whole are input to the separate machine learning model, or one or more of the discovery probability information, text information, and basic score information included in the output information are input together with the feature information, and the comprehensive evaluation information is derived from these inputs.
  • By using the feature information derived from the machine learning model as an input element of the separate machine learning model in the comprehensive evaluation information deriving step (S25), a more accurate competency evaluation result can be derived.
  • FIG. 19 schematically shows the internal configuration of a feature extraction model according to an embodiment of the present invention.
  • The above-described machine learning model may include a feature extraction model and a feature inference model. The feature extraction model according to the embodiment shown in FIG. 19 may include: a first deep neural network for extracting spatial feature information to derive a plurality of image feature information from the frames of the answer video performed by the evaluated person; a second deep neural network for extracting spatial feature information to derive a plurality of voice feature information from the voice information of the answer video; a first recurrent neural network module that receives the plurality of image feature information and derives first detailed feature information; a second recurrent neural network module that receives the plurality of voice feature information and derives second detailed feature information; and a third recurrent neural network module that derives third detailed feature information from text obtained by converting the voice information of the answer video through Speech-to-Text (STT), or from a script input based on the answer video by an administrator of the server system 1000 or the like.
  • The first deep neural network and the second deep neural network may correspond to CNN modules and the like; in the embodiment shown in FIG. 19, the first deep neural network corresponds to the first CNN module and the second deep neural network corresponds to the second CNN module.
  • The first, second, and third recurrent neural network modules may correspond to LSTM modules included in an RNN module; in the embodiment shown in FIG. 19, they correspond to the first, second, and third LSTM modules, respectively.
  • The plurality of frames may be generated by dividing the answer video at preset time intervals, for example as in the sketch below.
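  • A minimal sketch (the 0.5 s interval and file name are illustrative assumptions):

    # Python sketch: split an answer video into frames at preset time intervals.
    import cv2

    def sample_frames(path, interval_sec=0.5):
        cap = cv2.VideoCapture(path)
        fps = cap.get(cv2.CAP_PROP_FPS) or 30.0        # fall back if FPS is unknown
        step = max(1, int(round(fps * interval_sec)))
        frames, index = [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if index % step == 0:
                frames.append(frame)   # each kept frame feeds the first CNN module
            index += 1
        cap.release()
        return frames

    frames = sample_frames("answer_video.mp4")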
  • The plurality of image feature information derived by the first CNN module is preferably input to the first LSTM module in chronological order.
  • Likewise, feature information of the voice (pitch, intensity, and the like) for preset time sections, or the raw voice data itself, is input to the second CNN module, and the voice feature information derived by the second CNN module is preferably input to the second LSTM module in chronological order.
  • The feature information for the voice may correspond to pitch or intensity; more preferably, the voice is divided into short sections, a Mel filter bank is applied to the spectrum of each section, and features are extracted through cepstral analysis, i.e., Mel-Frequency Cepstral Coefficients (MFCC) may be used, as in the sketch below.
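  • A minimal sketch of this pipeline using librosa, which bundles the sectioning, Mel filter bank, and cepstral analysis (the sampling rate, window sizes, coefficient count, and file name are illustrative assumptions):

    # Python sketch: MFCC extraction from the answer audio.
    import librosa

    y, sr = librosa.load("answer_audio.wav", sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                                n_fft=400, hop_length=160)  # 25 ms / 10 ms sections
    print(mfcc.shape)  # (13, number_of_sections) -> input to the second CNN module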
  • The script input to the feature extraction model may correspond to a vector sequence in which the script is embedded in token units, for example as in the sketch below.
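  • A minimal sketch (the vocabulary, tokenization, and embedding dimension are illustrative assumptions):

    # Python sketch: embed the script in token units to obtain the vector
    # sequence consumed by the third LSTM module.
    import torch
    import torch.nn as nn

    vocab = {"<unk>": 0, "i": 1, "led": 2, "the": 3, "project": 4}
    embedding = nn.Embedding(num_embeddings=len(vocab), embedding_dim=64)

    tokens = "i led the project".split()
    ids = torch.tensor([[vocab.get(t, 0) for t in tokens]])  # (1, seq_len)
    vectors = embedding(ids)                                  # (1, seq_len, 64)
    print(vectors.shape)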
  • The feature information (a vector sequence) corresponding to the output of the feature extraction model is derived based on the first detailed feature information, the second detailed feature information, and the third detailed feature information.
  • The feature information may be derived by simply combining the first, second, and third detailed feature information, or by applying weights or the like to each of them, as sketched below.
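  • Both variants can be sketched as follows (the 128-dimensional vectors and the weights are illustrative assumptions):

    # Python sketch: combine the three detailed feature vectors by simple
    # concatenation or by a weighted combination.
    import torch

    first = torch.randn(1, 128)   # from the first LSTM module (image)
    second = torch.randn(1, 128)  # from the second LSTM module (voice)
    third = torch.randn(1, 128)   # from the third LSTM module (script)

    simple = torch.cat([first, second, third], dim=-1)             # (1, 384)
    weighted = torch.cat([0.5 * first, 0.3 * second, 0.2 * third], dim=-1)
    print(simple.shape, weighted.shape)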
  • The feature inference model applies weights learned by a plurality of fully connected layers to the feature information derived from the feature extraction model to produce an intermediate result (a representative vector), from which the result value for the answer video performed by the evaluated person is derived.
  • In this way the above-described machine learning model may analyze the answer video performed by the evaluated person and derive information on the degree to which the evaluated person possesses the specific competency corresponding to that answer video.
  • The number of fully connected layers is not limited to the number shown in FIG. 20, and the feature inference model may include one or more fully connected layers.
  • In some embodiments the intermediate result may be omitted.
  • The feature inference model may use a Softmax activation function to handle classification according to a preset criterion, or may derive a score using a sigmoid activation function or the like; a sketch follows.
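  • A minimal sketch of such a feature inference model (the layer sizes, depth, and class count are illustrative assumptions):

    # Python sketch: fully connected layers produce a representative vector,
    # then a softmax head (classification by preset criteria) or a sigmoid
    # head (score) derives the result value.
    import torch
    import torch.nn as nn

    class FeatureInference(nn.Module):
        def __init__(self, in_dim=384, hidden=128, n_classes=5):
            super().__init__()
            self.fc = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
            self.classify = nn.Linear(hidden, n_classes)  # softmax branch
            self.score = nn.Linear(hidden, 1)             # sigmoid branch

        def forward(self, feature_information):
            representative = self.fc(feature_information)  # intermediate result
            probs = torch.softmax(self.classify(representative), dim=-1)
            score = torch.sigmoid(self.score(representative))
            return probs, score

    probs, score = FeatureInference()(torch.randn(1, 384))
    print(probs.shape, float(score))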
  • FIG. 21 schematically illustrates an internal configuration of a computing device according to an embodiment of the present invention.
  • the above-described server system 1000 illustrated in FIG. 1 may include components of the computing device illustrated in FIG. 21 .
  • The computing device 11000 may include at least one processor 11100, a memory 11200, a peripheral interface 11300, an input/output (I/O) subsystem 11400, a power circuit 11500, and a communication circuit 11600.
  • the computing device 11000 may correspond to the server system 1000 illustrated in FIG. 1 or one or more servers included in the server system 1000 .
  • The memory 11200 may include, for example, high-speed random access memory, a magnetic disk, SRAM, DRAM, ROM, flash memory, or non-volatile memory.
  • the memory 11200 may include a software module, an instruction set, or other various data required for the operation of the computing device 11000 .
  • access to the memory 11200 from other components such as the processor 11100 or the peripheral device interface 11300 may be controlled by the processor 11100 .
  • Peripheral interface 11300 may couple input and/or output peripherals of computing device 11000 to processor 11100 and memory 11200 .
  • the processor 11100 may execute a software module or an instruction set stored in the memory 11200 to perform various functions for the computing device 11000 and process data.
  • the input/output subsystem may couple various input/output peripherals to the peripheral interface 11300 .
  • The input/output subsystem may include controllers for coupling peripheral devices such as a monitor, keyboard, mouse, or printer, or, where required, a touch screen or sensor, to the peripheral interface 11300.
  • input/output peripherals may be coupled to peripheral interface 11300 without going through an input/output subsystem.
  • the power circuit 11500 may supply power to all or some of the components of the terminal.
  • The power circuit 11500 may include a power management system, one or more power sources such as a battery or alternating current (AC), a charging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and any other components for the generation, management, and distribution of power.
  • the communication circuit 11600 may enable communication with another computing device using at least one external port.
  • The communication circuit 11600 may include an RF circuit to transmit and receive RF signals, also known as electromagnetic signals, thereby enabling communication with other computing devices.
  • FIG. 21 is only an example of the computing device 11000; the computing device 11000 may omit some components shown in FIG. 21, further include additional components not shown in FIG. 21, or have a configuration or arrangement in which two or more components are combined.
  • A computing device for a communication terminal in a mobile environment may further include a touch screen, a sensor, and the like in addition to the components shown in FIG. 21, and the communication circuit 11600 may include circuitry for various communication methods such as WiFi, 3G, LTE, Bluetooth, NFC, and Zigbee.
  • The components that may be included in the computing device 11000 may be implemented as hardware, software, or a combination of both, including integrated circuits specialized for one or more types of signal processing or for specific applications.
  • Methods according to an embodiment of the present invention may be implemented in the form of program instructions that can be executed through various computing devices and recorded in a computer-readable medium.
  • the program according to the present embodiment may be configured as a PC-based program or an application dedicated to a mobile terminal.
  • the application to which the present invention is applied may be installed in the user terminal or the affiliated store terminal through the file provided by the file distribution system.
  • the file distribution system may include a file transmission unit (not shown) that transmits the file in response to a request from a user terminal or an affiliated store terminal.
  • the device described above may be implemented as a hardware component, a software component, and/or a combination of the hardware component and the software component.
  • The devices and components described in the embodiments may be implemented using one or more general-purpose or special-purpose computers, such as, for example, a processor, a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field-programmable gate array (FPGA), a programmable logic unit (PLU), a microprocessor, or any other device capable of executing and responding to instructions.
  • the processing device may execute an operating system (OS) and one or more software applications executed on the operating system.
  • A processing device may also access, store, manipulate, process, and generate data in response to execution of the software.
  • Although a processing device is sometimes described as being used singly, one of ordinary skill in the art will appreciate that it may include a plurality of processing elements and/or a plurality of types of processing elements. For example, the processing device may include a plurality of processors, or one processor and one controller. Other processing configurations, such as parallel processors, are also possible.
  • Software may comprise a computer program, code, instructions, or a combination of one or more of these, and may configure the processing device to operate as desired or may command the processing device independently or collectively.
  • The software and/or data may be permanently or temporarily embodied in any kind of machine, component, physical device, virtual equipment, computer storage medium or device, or transmitted signal wave, so as to be interpreted by the processing device or to provide instructions or data to it.
  • the software may be distributed over networked computing devices, and may be stored or executed in a distributed manner. Software and data may be stored in one or more computer-readable recording media.
  • the method according to the embodiment may be implemented in the form of program instructions that can be executed through various computer means and recorded in a computer-readable medium.
  • the computer-readable medium may include program instructions, data files, data structures, etc. alone or in combination.
  • the program instructions recorded on the medium may be specially designed and configured for the embodiment, or may be known and available to those skilled in the art of computer software.
  • Examples of the computer-readable recording medium include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROM and DVD; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory.
  • Examples of program instructions include not only machine language code such as that generated by a compiler, but also high-level language code that can be executed by a computer using an interpreter or the like.
  • the hardware devices described above may be configured to operate as one or more software modules to perform the operations of the embodiments, and vice versa.
  • Since the evaluation result is derived from the answer video of the evaluated person through the machine learning model that performs the evaluation of a specific competency, the time and cost required for the evaluation are reduced and objective evaluation results can be obtained at the same time.
  • The evaluation interface provided to the evaluator in the evaluation interface providing step includes a script layer that displays a script corresponding to the evaluated person's answer video, so that the evaluator can easily recognize the evaluated person's answer.
  • Since a behavioral indicator list area for the corresponding question or specific competency is displayed, the evaluator can easily select the relevant behavioral indicators.
  • The evaluation interface includes a behavioral indicator layer in which a specific region of the script selected by the evaluator from the script layer is displayed together with the specific behavioral indicator selected from the behavioral indicator list area, so that the evaluator can easily grasp the evaluated person's answer for each behavioral indicator.
  • The evaluation interface includes an in-depth question layer in which the evaluator enters in-depth questions according to the evaluated person's answer video, and a special-notes layer in which the evaluator enters notable points about the answer video, so that an evaluator being trained in the evaluation method can compare his or her own in-depth questions and notes with those written by experts.
  • Since the evaluation result is derived by separating the image information and the voice information from the evaluated person's answer video and inputting each into the machine learning model, the context and intent of the evaluated person's answer can be grasped in detail and an accurate evaluation result can be derived.
  • Since the second evaluated-person capability information derived in the capability information deriving step through the machine learning model includes the discovery probability information for each behavioral indicator, the evaluation result can be provided objectively.
  • Since the second evaluated-person capability information derived in the capability information deriving step through the machine learning model further includes text information from the evaluated person's answer video corresponding to the discovery probability information for each behavioral indicator, the answer corresponding to each behavioral indicator can be presented concretely.
  • Since the in-depth questions are set based on the derived behavioral indicators included in the first output information derived in the first output information deriving step and on the plurality of behavioral indicators for the specific competency, in-depth questions that elicit answers about behavioral indicators not yet observed or evaluated can be provided without an evaluator.
  • Since the comprehensive evaluation information derived in the comprehensive evaluation information deriving step includes a score for a specific competency calculated by synthesizing the discovery probability information in the first output information and the second output information, the evaluation result of the evaluated person can be recognized intuitively.
  • An embodiment of the present invention may also be implemented in the form of a recording medium including instructions executable by a computer, such as a program module to be executed by a computer.
  • Computer-readable media can be any available media that can be accessed by a computer and includes both volatile and nonvolatile media, removable and non-removable media.
  • Computer-readable media may include both computer storage media and communication media.
  • Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data.
  • Communication media typically includes computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave, or other transport mechanism, and includes any information delivery media.

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Human Resources & Organizations (AREA)
  • Strategic Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Theoretical Computer Science (AREA)
  • Economics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Development Economics (AREA)
  • Data Mining & Analysis (AREA)
  • Game Theory and Decision Science (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The present invention relates to a method, system, and computer-readable medium for deriving in-depth questions for automated evaluation of an interview video using a machine learning model, and more particularly to a method, system, and computer-readable medium for: presenting questions relating to a specific competency to be evaluated to a candidate, receiving an answer video performed by the candidate, and deriving first output information based on the corresponding answer video; deriving, based on a plurality of behavioral indicators relating to the specific competency and one or more derived behavioral indicators included in the first output information, in-depth questions capable of eliciting in the candidate's answers behavioral indicators that were incomplete and behavioral indicators not included in the derived behavioral indicators; and receiving from the corresponding candidate a video of answers to the in-depth questions and finally performing an evaluation of the corresponding specific competency.
PCT/KR2021/008644 2020-07-10 2021-07-07 Method, system and computer-readable medium for deriving in-depth questions for automated evaluation of an interview video using a machine learning model WO2022010255A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2020-0085062 2020-07-10
KR1020200085062A KR102475524B1 (ko) 2020-07-10 2020-07-10 Method, system and computer-readable medium for deriving in-depth questions for automated evaluation of an interview video using a machine learning model

Publications (1)

Publication Number Publication Date
WO2022010255A1 true WO2022010255A1 (fr) 2022-01-13

Family

ID=79553328

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2021/008644 WO2022010255A1 (fr) 2020-07-10 2021-07-07 Method, system and computer-readable medium for deriving in-depth questions for automated evaluation of an interview video using a machine learning model

Country Status (2)

Country Link
KR (1) KR102475524B1 (fr)
WO (1) WO2022010255A1 (fr)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102630804B1 * 2022-01-24 2024-01-29 주식회사 허니엠앤비 Emotion analysis method and emotion analysis device
KR102449661B1 * 2022-06-27 2022-10-04 주식회사 레몬베이스 Method, apparatus and system for providing artificial intelligence-based recruitment services
CN117557426B * 2023-12-08 2024-05-07 广州市小马知学技术有限公司 Homework data feedback method and learning evaluation system based on an intelligent question bank

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004309631A * 2003-04-03 2004-11-04 Nippon Telegr &amp; Teleph Corp &lt;Ntt&gt; Dialogue practice support device, method, and program
KR20130055833A * 2011-11-21 2013-05-29 배창수 Job-seeking and recruiting interview mediation system using terminals
JP2017219989A * 2016-06-07 2017-12-14 株式会社採用と育成研究社 Online interview evaluation device, method, and program
KR20190118140A * 2018-04-09 2019-10-17 주식회사 마이다스아이티 Interview automation system based on online talent analysis
KR20190140805A * 2018-05-29 2019-12-20 주식회사 제네시스랩 Machine learning-based non-verbal evaluation method, system and computer-readable medium


Also Published As

Publication number Publication date
KR20220007193A (ko) 2022-01-18
KR102475524B1 (ko) 2022-12-08

Similar Documents

Publication Publication Date Title
WO2022010255A1 Method, system and computer-readable medium for deriving in-depth questions for automated evaluation of an interview video using a machine learning model
WO2020190112A1 Method, apparatus, device and medium for generating subtitle information for multimedia data
WO2020138624A1 Noise removal apparatus and method therefor
WO2020197241A1 Device and method for compressing a machine learning model
WO2020145571A2 Method and system for managing an automatic evaluation model for an interview video, and computer-readable medium
WO2018143707A1 Makeup evaluation system and operating method thereof
WO2019225961A1 Electronic device for generating a response to a voice input using an application, and operating method thereof
WO2022065811A1 Multimodal translation method, apparatus, electronic device and computer-readable storage medium
WO2019135621A1 Video playback device and control method thereof
WO2020209693A1 Electronic device for updating an artificial intelligence model, server, and operating method thereof
WO2021006404A1 Artificial intelligence server
WO2020036297A1 Electronic apparatus and control method thereof
WO2020213758A1 Voice-interactive artificial intelligence device and method therefor
WO2020017827A1 Electronic device and control method for electronic device
WO2021215804A1 Device and method for providing an interactive audience simulation
WO2020184753A1 Artificial intelligence apparatus for performing voice control using a voice extraction filter, and method therefor
WO2022010240A1 Scalp and hair management system
WO2020117006A1 AI-based facial recognition system
WO2022154457A1 Action localization method, device, electronic equipment, and computer-readable storage medium
WO2019203421A1 Display device and display device control method
WO2022265127A1 Artificial intelligence learning-based user churn rate prediction and user knowledge tracing system, and operation method thereof
WO2020251073A1 Massage device
WO2020001031A1 School and major recommendation method and system
WO2024005330A1 Artificial intelligence-based method, apparatus and system for providing a work engagement state determination and management service
WO2024005329A1 Method, apparatus and system for providing an artificial intelligence-based member evaluation feedback service

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21837798

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21837798

Country of ref document: EP

Kind code of ref document: A1