WO2022010255A1 - Method, system, and computer-readable medium for deriving in-depth questions for automated evaluation of interview video by using machine learning model


Info

Publication number
WO2022010255A1
Authority
WO
WIPO (PCT)
Prior art keywords
information, derived, question, evaluation, evaluator
Application number
PCT/KR2021/008644
Other languages
French (fr)
Korean (ko)
Inventor
유대훈
이영복
Original Assignee
주식회사 제네시스랩 (Genesis Lab Co., Ltd.)
Application filed by 주식회사 제네시스랩 (Genesis Lab Co., Ltd.)
Publication of WO2022010255A1

Classifications

    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS: G06N 20/00 Machine learning; G06N 3/02 Neural networks; G06N 3/08 Learning methods (under G06N 3/00 Computing arrangements based on biological models)
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES > G06Q 10/00 Administration; Management
    • G06Q 10/06 Resources, workflows, human or project management: G06Q 10/06315 Needs-based resource requirements planning or analysis; G06Q 10/06316 Sequencing of tasks or work; G06Q 10/0633 Workflow analysis; G06Q 10/0637 Strategic management or analysis, e.g. setting a goal or target of an organisation, planning actions based on goals, analysis or evaluation of effectiveness of goals; G06Q 10/0639 Performance analysis of employees or of enterprise or organisation operations
    • G06Q 10/10 Office automation; Time management: G06Q 10/103 Workflow collaboration or project management; G06Q 10/105 Human resources; G06Q 10/1053 Employment or hiring

Definitions

  • The present invention relates to a method, system, and computer-readable medium for deriving in-depth questions for automated evaluation of an interview video using a machine learning model. More particularly, a question about a specific competency to be evaluated is presented to an evaluatee, and a video of the evaluatee's answer is received; first output information is derived from this answer video; based on the plurality of behavioral indicators for the specific competency and the one or more derived behavioral indicators included in the first output information, in-depth questions are derived that can elicit the evaluatee's answers regarding the behavioral indicators not included among the derived behavioral indicators and regarding incomplete behavioral indicators; a video answering the in-depth questions is received from the evaluatee; and the specific competency is finally evaluated.
  • NCS: National Competency Standards
  • Conventional methods for evaluating competency must be performed by an evaluator who has received specialized training in the evaluation method or who has abundant experience. Such evaluation is therefore costly, and because the evaluator personally carries out the detailed evaluation procedures even when an expert conducts the evaluation, it also takes a great deal of time.
  • In addition, when the evaluator determines that the evaluatee's answer contains no content related to the competency being evaluated, the evaluator must compose a related question and present it again so that the evaluatee can answer with the relevant content. This additional process of presenting a further question and analyzing the answer requires even more time for evaluation.
  • Accordingly, an object of the present invention is to provide a method, system, and computer-readable medium for deriving in-depth questions for automated evaluation of an interview video using a machine learning model, in which a question about a specific competency to be evaluated is presented to an evaluatee and a video of the evaluatee's answer is received; first output information is derived from the answer video; in-depth questions that can elicit the evaluatee's answers regarding behavioral indicators not included among the derived behavioral indicators, and regarding incomplete behavioral indicators, are derived based on the plurality of behavioral indicators for the specific competency and the one or more derived behavioral indicators included in the first output information; a video answering the in-depth questions is received from the evaluatee; and the specific competency is finally evaluated.
  • To achieve the above object, an embodiment of the present invention provides an automated evaluation method for an evaluatee based on behavioral indicators, performed in a server system, wherein the server system presets a plurality of behavioral indicators and a plurality of questions for a specific competency.
  • Each of the plurality of behavioral indicators has a correlation with one or more of the plurality of questions, as sketched in the data model below.
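A minimal sketch, assuming a Python representation, of the preset data model described above: a competency holds preset questions and behavioral indicators, and each indicator records which questions it is correlated with. All class, field, and example names are illustrative, not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class BehavioralIndicator:
    indicator_id: str
    description: str
    correlated_question_ids: list[str]   # one or more correlated questions

@dataclass
class Competency:
    name: str
    questions: dict[str, str]            # question_id -> question text
    indicators: list[BehavioralIndicator] = field(default_factory=list)

teamwork = Competency(
    name="teamwork",
    questions={
        "q1": "How did you resolve conflicts between team members?",
        "q2": "Describe a time you helped a struggling teammate.",
    },
    indicators=[
        BehavioralIndicator(
            indicator_id="bi1",
            description="Induces team members to collaborate for the team's goal",
            correlated_question_ids=["q1", "q2"],
        ),
    ],
)
```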
  • The automated evaluation method includes: a general question step, including a first question providing step of providing the evaluatee with one or more of the preset questions for evaluating the specific competency, and a first output information derivation step of inputting a video of the evaluatee's answers to the one or more questions provided in the first question providing step into the machine learning model and deriving first output information that includes evaluation information on the evaluatee's specific competency and derived behavioral indicators related to the evaluation information; an in-depth question setting step of setting one or more in-depth questions based on the one or more derived behavioral indicators after the general question step has been performed one or more times; and a competency evaluation step of evaluating the specific competency based on the video of the evaluatee's answers to the in-depth questions and the first output information derived in the first output information derivation step.
  • In an embodiment of the present invention, the competency evaluation step includes: an in-depth question step, including a second question providing step of providing the evaluatee with one or more of the in-depth questions set in the in-depth question setting step, and a second output information derivation step of inputting a video of the evaluatee's answers to the one or more in-depth questions provided in the second question providing step into the machine learning model and deriving second output information that includes evaluation information on the evaluatee's specific competency and derived behavioral indicators related to the evaluation information; and a comprehensive evaluation information derivation step of deriving comprehensive evaluation information for the evaluatee's specific competency based on the first output information and the second output information.
  • In an embodiment of the present invention, the in-depth question setting step may determine, from among the plurality of behavioral indicators set for the specific competency, the behavioral indicators that were not derived as derived behavioral indicators through the general question step, and may determine one or more in-depth questions to elicit answers from the evaluatee related to those underived behavioral indicators.
  • In an embodiment of the present invention, the in-depth question setting step may also determine, as an incomplete behavioral indicator, a behavioral indicator that was derived as a derived behavioral indicator through the general question step but does not meet a preset discrimination criterion, and may determine one or more in-depth questions to elicit answers from the evaluatee related to that incomplete behavioral indicator, as in the selection sketch below.
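Continuing the data model above, a hedged sketch of the rule-based in-depth question setting described in this paragraph and the previous one: indicators never derived in the first output information, plus indicators derived below a preset discrimination criterion (incomplete behavioral indicators), are targeted through their correlated questions. The 0.5 threshold and all names are illustrative assumptions.

```python
def set_in_depth_questions(competency, discovery_probs, threshold=0.5):
    """discovery_probs maps indicator_id -> discovery probability from the
    first output information; a missing key means the indicator was not
    derived at all."""
    targets = []
    for bi in competency.indicators:
        p = discovery_probs.get(bi.indicator_id)
        if p is None:            # behavioral indicator not derived at all
            targets.append(bi)
        elif p < threshold:      # derived, but fails the preset criterion:
            targets.append(bi)   # treated as an incomplete behavioral indicator
    # return the questions correlated with each targeted indicator
    return [
        competency.questions[qid]
        for bi in targets
        for qid in bi.correlated_question_ids
    ]

# e.g. bi1 derived with probability 0.3 -> incomplete -> ask q1 and q2 again
print(set_in_depth_questions(teamwork, {"bi1": 0.3}))
```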
  • In an embodiment of the present invention, the first output information derived in the first output information derivation step may be input to a machine-learning-based in-depth question recommendation model, which derives one or more in-depth questions to elicit the evaluatee's answers related to the behavioral indicators that were not derived as derived behavioral indicators; a toy recommender sketch follows below.
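The learned alternative just described can be pictured as a classifier that maps the first output information to a recommended in-depth question. The patent does not fix a model family; logistic regression over per-indicator discovery probabilities is a toy stand-in for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# toy features: per-indicator discovery probabilities from first output information
X_train = np.array([[0.9, 0.1, 0.4],
                    [0.2, 0.8, 0.7]])
# labels: the in-depth question a human evaluator chose for that situation
y_train = np.array(["followup_for_bi2", "followup_for_bi1"])

recommender = LogisticRegression().fit(X_train, y_train)
print(recommender.predict([[0.85, 0.15, 0.3]]))   # -> recommended question id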
  • In an embodiment of the present invention, the first output information derivation step and the second output information derivation step may separate image information and audio information from the evaluatee's answer video, preprocess each of the separated image information and audio information, and input them to the machine learning model.
  • In an embodiment of the present invention, the first output information derivation step and the second output information derivation step may include: deriving text information based on the evaluatee's answer video; performing an embedding that expresses the derived text information as a vector; and inputting the embedded vector into the machine learning model, as in the pipeline sketch below.
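A sketch of the text pipeline in the two preceding paragraphs: derive text from the answer video, embed it as a vector, and pass the vector to the evaluation model. The specific libraries (openai-whisper for speech-to-text, sentence-transformers for embedding) are assumptions for illustration; the patent does not prescribe them.

```python
import whisper                                   # assumed STT library
from sentence_transformers import SentenceTransformer

stt_model = whisper.load_model("base")
text = stt_model.transcribe("answer_video.mp4")["text"]   # answer video -> text

embedder = SentenceTransformer("all-MiniLM-L6-v2")
vector = embedder.encode(text)                   # embedding: text -> vector

# `evaluation_model` stands in for the patent's machine learning model
# scores = evaluation_model.predict(vector.reshape(1, -1))
```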
  • In an embodiment of the present invention, the first output information derived in the first output information derivation step and the second output information derived in the second output information derivation step may further include discovery probability information for the derived behavioral indicators related to the evaluation information, and the text information of the evaluatee's answer video corresponding to that discovery probability information.
  • In an embodiment of the present invention, the comprehensive evaluation information derived in the comprehensive evaluation information derivation step may include a score for the specific competency, calculated by synthesizing the discovery probability information for each of the derived behavioral indicators derived in the first output information derivation step and the second output information derivation step.
  • In an embodiment of the present invention, the comprehensive evaluation information derived in the comprehensive evaluation information derivation step may include a score for the specific competency derived based on one or more of: the discovery probability information and text information for the derived behavioral indicators included in the first output information and the second output information; basic score information for the corresponding answer videos; and the feature information generated by the machine learning model in deriving the first output information and the second output information.
  • In an embodiment of the present invention, the comprehensive evaluation information derived in the comprehensive evaluation information derivation step may include a score for the specific competency calculated by synthesizing the result information of the preprocessing performed on each answer video input in the first output information derivation step and the second output information derivation step. A score-synthesis sketch follows below.
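A sketch of synthesizing discovery probability information from both output stages into a single competency score, as the three preceding paragraphs describe. Averaging the merged probabilities and mapping them onto a 1-to-5 band is an illustrative choice, not the patent's fixed formula.

```python
def comprehensive_score(first_probs: dict, second_probs: dict) -> float:
    """Synthesize per-indicator discovery probabilities from the first and
    second output information into one score for the specific competency."""
    merged = {**first_probs, **second_probs}   # second-stage values take precedence
    if not merged:
        return 1.0
    mean_p = sum(merged.values()) / len(merged)
    return round(1.0 + 4.0 * mean_p, 1)        # map [0, 1] onto a 1-5 score

# bi2 was incomplete in the first stage but confirmed by the in-depth answer
print(comprehensive_score({"bi1": 0.9, "bi2": 0.2}, {"bi2": 0.7}))   # 4.2
```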
  • To achieve the above object, an embodiment of the present invention provides a server system for performing the automated evaluation method for an evaluatee based on behavioral indicators, in which a plurality of behavioral indicators and a plurality of questions are preset for a specific competency. Each of the plurality of behavioral indicators has a correlation with one or more of the plurality of questions, and the server system includes: a general question unit, including a first question providing unit that provides the evaluatee with one or more of the preset questions for evaluating the specific competency, and a first output information derivation unit that inputs a video of the evaluatee's answers to the one or more provided questions into the machine learning model and derives first output information including evaluation information on the evaluatee's specific competency and derived behavioral indicators related to the evaluation information; an in-depth question setting unit that sets one or more in-depth questions based on the one or more derived behavioral indicators after the general question unit has operated one or more times; and a competency evaluation unit that evaluates the specific competency based on the video of the evaluatee's answers to the in-depth questions and the first output information derived by the first output information derivation unit.
  • To achieve the above object, an embodiment of the present invention provides a computer-readable medium for performing the automated evaluation method in the server system, wherein a plurality of behavioral indicators and a plurality of questions are preset for a specific competency in the server system, and each of the plurality of behavioral indicators has a correlation with one or more of the plurality of questions. The automated evaluation method includes: a general question step, including a first question providing step of providing the evaluatee with one or more of the preset questions for evaluating the specific competency, and a first output information derivation step of inputting a video of the evaluatee's answers to the one or more provided questions into a machine learning model and deriving first output information including evaluation information on the evaluatee's specific competency and derived behavioral indicators related to the evaluation information; an in-depth question setting step of setting one or more in-depth questions based on the one or more derived behavioral indicators; and a competency evaluation step of evaluating the specific competency based on the video of the evaluatee's answers to the in-depth questions and the first output information.
  • According to an embodiment of the present invention, because the evaluation result is derived from the evaluatee's answer video through the machine learning model that evaluates a specific competency, the time and cost required for evaluation can be reduced while objective evaluation results are obtained at the same time.
  • According to an embodiment of the present invention, the evaluation interface provided to the evaluator in the evaluation interface providing step includes a script layer that displays a script of the evaluatee's answer video, so that the evaluator can easily grasp the evaluatee's answer.
  • According to an embodiment of the present invention, a behavioral indicator list area for the corresponding question or specific competency is displayed, so that the evaluator can easily select a behavioral indicator.
  • According to an embodiment of the present invention, the evaluation interface includes a behavioral indicator layer that displays the specific area of the script selected by the evaluator in the script layer together with the specific behavioral indicator selected from the behavioral indicator list area, so that the evaluator can easily grasp the evaluatee's answer for each behavioral indicator.
  • According to an embodiment of the present invention, the evaluation interface includes an in-depth question layer in which the evaluator enters an in-depth question according to the evaluatee's answer video, and a singularity layer in which the evaluator enters noteworthy items about the evaluatee's answer video, so that an evaluator receiving training in the evaluation method can compare his or her entries with the in-depth questions and noteworthy items written by experts in the evaluation method.
  • According to an embodiment of the present invention, the evaluation result is derived by separating image information and audio information from the evaluatee's answer video and inputting each into the machine learning model, so that an accurate evaluation result can be derived by grasping in detail the context and intent of the answer in the evaluatee's answer video.
  • According to an embodiment of the present invention, because the second evaluatee competency information derived in the competency information derivation step through the machine learning model includes discovery probability information for each behavioral indicator, the evaluation result can be provided objectively.
  • According to an embodiment of the present invention, the second evaluatee competency information derived in the competency information derivation step through the machine learning model further includes the text information of the evaluatee's answer video corresponding to the discovery probability information for each behavioral indicator, so that the evaluatee's answer corresponding to each behavioral indicator can be presented concretely.
  • According to an embodiment of the present invention, because an in-depth question is set based on the derived behavioral indicators included in the first output information derived in the first output information derivation step and the plurality of behavioral indicators for the specific competency, in-depth questions that elicit answers about behavioral indicators not yet observed can be provided without an evaluator.
  • According to an embodiment of the present invention, because the comprehensive evaluation information derived in the comprehensive evaluation information derivation step includes a score for the specific competency calculated by synthesizing the discovery probability information in the first output information and the second output information, the evaluatee's evaluation result can be recognized intuitively.
  • FIG. 1 schematically shows the form of the overall system for performing a method of providing automated evaluation of an interview video using a machine learning model according to an embodiment of the present invention.
  • FIG. 2 schematically shows the internal configuration of a server system according to an embodiment of the present invention.
  • FIG. 3 schematically illustrates the configuration of behavioral indicators set according to a specific competency to be evaluated and the questions provided to an evaluatee according to an embodiment of the present invention.
  • FIG. 4 schematically illustrates a method of providing automated evaluation of an interview video using a machine learning model performed in a server system according to an embodiment of the present invention.
  • FIG. 5 schematically illustrates a screen on which an evaluatee answers a question according to an embodiment of the present invention.
  • FIG. 6 schematically shows the configuration of an evaluation interface according to an embodiment of the present invention.
  • FIG. 7 schematically shows a configuration in which a behavioral indicator layer is displayed according to the evaluator's selection in the script layer according to an embodiment of the present invention.
  • FIG. 8 schematically shows the configuration of another type of evaluation interface according to an embodiment of the present invention.
  • FIG. 9 schematically illustrates the process of training the machine learning model according to the model learning step according to an embodiment of the present invention.
  • FIG. 10 schematically shows the detailed configuration of a competency information derivation unit according to an embodiment of the present invention.
  • FIG. 11 schematically illustrates a method of deriving in-depth questions for automated evaluation of an interview video performed in a server system according to an embodiment of the present invention.
  • FIG. 13 schematically shows detailed steps of the in-depth question setting step implemented in another method according to an embodiment of the present invention.
  • FIG. 16 schematically shows a configuration for deriving output information from the machine learning model by inputting a video of an evaluatee's answer into the machine learning model according to an embodiment of the present invention.
  • FIG. 17 schematically shows a configuration for setting an in-depth question according to the output information derived by inputting a video of an evaluatee's answer into the machine learning model, and for deriving comprehensive evaluation information, according to an embodiment of the present invention.
  • FIG. 18 schematically illustrates a configuration for deriving comprehensive evaluation information by further including feature information derived from the machine learning model into which a video of an evaluatee's answer is input, according to an embodiment of the present invention.
  • FIG. 19 schematically shows the internal configuration of a feature extraction model according to an embodiment of the present invention.
  • FIG. 21 schematically illustrates the internal configuration of a computing device according to an embodiment of the present invention.
  • Terms such as first and second may be used to describe various components, but the components are not limited by these terms. The terms are used only to distinguish one component from another.
  • For example, a first component may be referred to as a second component, and similarly, a second component may also be referred to as a first component. The term "and/or" includes a combination of a plurality of related listed items or any one of the plurality of related listed items.
  • a "part” includes a unit realized by hardware, a unit realized by software, and a unit realized using both.
  • one unit may be implemented using two or more hardware, and two or more units may be implemented by one hardware.
  • A '~unit' is not limited to software or hardware, and a '~unit' may be configured to reside on an addressable storage medium or configured to execute on one or more processors. Accordingly, as an example, a '~unit' includes components such as software components, object-oriented software components, class components, and task components, as well as processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables.
  • Components and '~units' may be combined into a smaller number of components and '~units' or further separated into additional components and '~units'.
  • Components and '~units' may be implemented to execute on one or more CPUs in a device or a secure multimedia card.
  • The 'evaluator terminal', 'first evaluatee terminal', 'second evaluatee terminal', and 'evaluation education manager terminal' mentioned below may each be implemented as a computer or a portable terminal that can access a server or another terminal through a network.
  • Here, the computer includes, for example, a notebook, desktop, or laptop equipped with a web browser, and the portable terminal is, for example, a wireless communication device that guarantees portability and mobility, including the following handheld-based wireless communication devices:
  • PCS (Personal Communication System)
  • GSM (Global System for Mobile communications)
  • PDC (Personal Digital Cellular)
  • PHS (Personal Handyphone System)
  • PDA (Personal Digital Assistant)
  • IMT (International Mobile Telecommunication)
  • CDMA (Code Division Multiple Access)
  • W-CDMA (Wideband Code Division Multiple Access)
  • WiBro (Wireless Broadband Internet)
  • The network refers to a wired network such as a local area network (LAN), a wide area network (WAN), or a value added network (VAN), or any kind of wireless network such as a mobile radio communication network or a satellite communication network.
  • The method of deriving in-depth questions for automated evaluation performed in the server system may correspond to a specific method of deriving evaluation results by inputting the answer video into the machine learning model, within the method of providing automated evaluation of an interview video using a machine learning model performed in the server system.
  • Because the machine learning model is trained based on information evaluated by human evaluators, the overall method of deriving evaluation results for an interview video through the trained machine learning model is explained first.
  • FIG. 1 schematically shows the form of the overall system for performing a method of providing automated evaluation of an interview video using a machine learning model according to an embodiment of the present invention.
  • The method of providing automated evaluation of an interview video is performed by the server system 1000, and to perform the method the server system 1000 can communicate with external terminals: the evaluator terminal 2000, the first evaluatee terminal 3000, the second evaluatee terminal 4000, and the evaluation education manager terminal 5000.
  • The server system 1000 may include one or more servers, and each server may communicate with the others to perform the method of providing automated evaluation of an interview video.
  • The evaluator terminal 2000 corresponds to the terminal used by the evaluator, who is the subject that performs evaluation based on the evaluatee's answer video.
  • The evaluator receives an answer video from the server system 1000 through the evaluator terminal 2000 and performs the evaluation.
  • The first evaluatee competency information, corresponding to the information evaluated by the evaluator, may be used as training data for the machine learning model described later.
  • Meanwhile, the evaluator may correspond to a subject who performs evaluation based on an evaluatee's interview video in a simulated manner in order to receive training in the evaluation method of the present invention.
  • In this case, the first evaluatee competency information can be used as information for teaching the evaluation method by the evaluation education manager, who is the user of the evaluation education manager terminal 5000.
  • The first evaluatee terminal 3000 corresponds to the terminal used by the first evaluatee, who is the subject that answers the questions provided through the server system 1000. Specifically, the first evaluatee receives one or more questions from the server system 1000 and answers each presented question on the first evaluatee terminal 3000, and the answer video performed by the first evaluatee is transmitted to the server system 1000. Meanwhile, the first evaluatee competency information can be derived by the evaluator receiving the answer video transmitted to the server system 1000 through the evaluator terminal 2000 and performing the evaluation as described above.
  • The second evaluatee terminal 4000 corresponds to the terminal used by the second evaluatee, who is the subject that answers the questions provided through the server system 1000.
  • Specifically, the second evaluatee receives one or more questions from the server system 1000 and answers each presented question on the second evaluatee terminal 4000, and the answer video performed by the second evaluatee is transmitted to the server system 1000.
  • The answer video performed by the second evaluatee and transmitted to the server system 1000 is input to the machine learning model, and second evaluatee competency information corresponding to the automated evaluation result for the answer video is derived in the server system 1000.
  • Meanwhile, the number of evaluatee terminals communicating with the server system 1000 shown in FIG. 1 is only for ease of explanation, and the server system 1000 can communicate with one or more evaluatee terminals.
  • Likewise, the answer video performed by the first evaluatee on the first evaluatee terminal 3000 is not used only for evaluation by the evaluator through the evaluator terminal 2000; as described above, it may also be input to the machine learning model of the server system 1000 to derive second evaluatee competency information.
  • Similarly, the answer video performed by the second evaluatee on the second evaluatee terminal 4000 is not limited to deriving second evaluatee competency information through the server system 1000; it may also be provided to the evaluator and used for the evaluator to perform the evaluation.
  • In other words, the terms first evaluatee and second evaluatee are used for ease of explanation, and the designations first and second do not imply any difference in configuration.
  • The evaluation education manager terminal 5000 is the terminal used by the evaluation education manager, who corresponds to a subject with expertise in the evaluation method based on answer videos.
  • The evaluation education manager performs evaluation through the evaluation education manager terminal 5000 and the evaluation result is transmitted to the server system 1000; this evaluation result is provided to a subject receiving training in the evaluation method of the present invention, so that the subject can compare his or her simulated evaluation results with those evaluated by the evaluation education manager.
  • Meanwhile, account types corresponding to the evaluator, the evaluatees, and the evaluation education manager exist on the server system 1000. Each account type can communicate with the server system 1000 through a specific terminal, and that terminal can receive information corresponding to the account type and provide it to the respective subject.
  • FIG. 2 schematically shows the internal configuration of the server system 1000 according to an embodiment of the present invention.
  • As shown in FIG. 2, the server system 1000 may include an evaluation interface providing unit 1100, a competency information receiving unit 1200, a model learning unit 1300, a question providing unit 1400, a competency information derivation unit 1500, and a DB 1600.
  • The evaluation interface providing unit 1100 provides the evaluator, through the evaluator terminal, with the answer video performed by the evaluatee, and provides an evaluation interface through which the evaluator enters the first evaluatee competency information. Accordingly, the evaluator can view the evaluatee's answer video through the evaluation interface displayed on the evaluator terminal 2000 and simultaneously check the contents of his or her evaluation.
  • The competency information receiving unit 1200 receives, from the evaluator terminal 2000, the first evaluatee competency information entered by the evaluator through the evaluation interface.
  • The model learning unit 1300 serves to train the machine learning model, and for this purpose the first evaluatee competency information received by the competency information receiving unit 1200 may be used as training data. More specifically, the model learning unit 1300 may train the machine learning model by processing the first evaluatee competency information into a form suitable for training.
  • The question providing unit 1400 provides one or more preset questions to the evaluatee so that the server system 1000 can evaluate the answer video for a specific competency. More specifically, the question providing unit 1400 may provide the evaluatee with one or more questions related to the competency selected by the evaluatee, or to the competency corresponding to the company the evaluatee has applied to or the job at that company.
  • The competency information derivation unit 1500 derives the second evaluatee competency information based on the video of the second evaluatee's answers to the questions provided through the question providing unit 1400.
  • Specifically, the competency information derivation unit 1500 may derive the second evaluatee competency information by inputting the video of the second evaluatee's answers into the machine learning model.
  • Preferably, the competency information derivation unit 1500 sets an in-depth question related to a specific behavioral indicator, and may derive comprehensive evaluation information corresponding to the second evaluatee competency information by further considering the video of the answer to that in-depth question; this will be described in more detail with reference to FIG. 10.
  • Meanwhile, a machine-learning-based in-depth question recommendation model for setting in-depth questions in the competency information derivation unit 1500 may additionally be stored in the DB 1600.
  • The machine learning model is a machine-learned model for performing evaluation based on answer videos. Preferably, a machine learning model is provided individually for each competency to be evaluated, and the server system 1000 may therefore include a plurality of machine learning models.
  • Meanwhile, the server system 1000 may include two or more servers, each including some of the above-described components, and the servers may communicate with one another to perform the method of providing automated evaluation of an interview video using the machine learning model. For example, the functions provided to the evaluator or the evaluatee may be included in one specific server while the machine learning model and the functions for training it are included in another, and the method of providing automated evaluation of an interview video using the machine learning model of the present invention can be performed through communication between the two servers.
  • FIG. 3 schematically illustrates the configuration of behavioral indicators set according to a specific competency to be evaluated and the questions provided to an evaluatee according to an embodiment of the present invention.
  • As shown in FIG. 3, one or more behavioral indicators and one or more questions may be set for each competency to be evaluated in order to evaluate the evaluatee's competency.
  • The behavioral indicator is an evaluation standard for evaluating the competency, and the evaluator can evaluate the extent to which the evaluatee possesses the corresponding competency by checking the answers in which the behavioral indicator is observed.
  • Meanwhile, each question is designed so that one or more behavioral indicators can be observed in the evaluatee's answer. For example, in the evaluatee's answer to the question 'How did you resolve conflicts between team members?', the behavioral indicator 'Induces team members to collaborate for the team's goal' can be observed.
  • Preferably, each question may be designed in a form that can induce answers covering one or more of a situation, a task, an action, and a result.
  • The questions designed for each competency as described above may be provided to the first evaluatee or the second evaluatee through the first evaluatee terminal 3000 or the second evaluatee terminal 4000, respectively, by the question providing unit 1400.
  • Preferably, the question providing unit 1400 may provide the evaluatee with questions appropriate to the company the evaluatee has applied to or wants to mock-interview for, or to the job at that company.
  • FIG. 4 schematically illustrates a method of providing automated evaluation of an interview video using a machine learning model performed by the server system 1000 according to an embodiment of the present invention.
  • As shown in FIG. 4, the server system 1000 performs the evaluation interface providing step (S10) of providing the evaluator with the evaluation interface including the answer video performed by the first evaluatee.
  • Specifically, the first evaluatee may request evaluation from the server system 1000 through the first evaluatee terminal 3000, and the question providing unit 1400 of the server system 1000 provides one or more questions corresponding to the request to the first evaluatee terminal 3000 so that the first evaluatee can generate an answer video.
  • The answer video thus generated by the first evaluatee may be transmitted to the server system 1000 and stored in the DB 1600.
  • Thereafter, when the evaluator requests evaluation, the server system 1000 may perform the evaluation interface providing step (S10), and the evaluation interface including the answer video performed by the first evaluatee corresponding to the evaluator's request is displayed on the evaluator terminal 2000.
  • The evaluator enters, through the evaluation interface displayed on the evaluator terminal 2000, the first evaluatee competency information including the evaluation information on the specific competency related to the first evaluatee's answer video and the behavioral indicator corresponding to each piece of evaluation information; the evaluator terminal 2000 transmits the entered first evaluatee competency information to the server system 1000, and the competency information receiving unit 1200 of the server system 1000 performs the competency information receiving step (S11) to receive it.
  • The evaluation information included in the first evaluatee competency information may correspond to information on where the corresponding behavioral indicator is observed in the first evaluatee's answers.
  • The model learning step (S12) may train the machine learning model for a specific competency based on the plurality of first evaluatee competency information received through the competency information receiving step (S11). Because the server system 1000 includes one or more machine learning models that perform evaluation for each competency, when training the machine learning model for a specific competency in the model learning step (S12), separate labeling may be performed using the first evaluatee competency information as training data, or the model learning step (S12) may be performed so that the first evaluatee competency information for the specific competency is distinguished from that for other competencies, and the labeled first evaluatee competency information can then be used as training data.
  • Meanwhile, the machine learning model may also use, as training data, the answer videos corresponding to each piece of first evaluatee competency information used as training data, as in the training sketch below.
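A hedged sketch of the model learning step: answers labeled by human evaluators (the first evaluatee competency information) become training data for the per-competency model that later estimates discovery probabilities. The feature shape, labels, and classifier choice are assumptions for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# each row: an embedded answer vector; each label: 1 if the human evaluator
# marked the behavioral indicator as observed in that answer, else 0
X = np.array([[0.1, 0.7, 0.2],
              [0.8, 0.3, 0.5],
              [0.2, 0.9, 0.1]])
y = np.array([1, 0, 1])

# one model per competency, as the paragraph above describes
teamwork_model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                               random_state=0)
teamwork_model.fit(X, y)
print(teamwork_model.predict_proba(X[:1]))   # discovery probability estimate
```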
  • Meanwhile, when the second evaluatee requests evaluation, the server system 1000 performs the question providing step (S13) of providing the second evaluatee with one or more preset questions corresponding to the request.
  • The evaluation request may be divided into a request for direct evaluation by an evaluator and a request for evaluation through the machine learning model of the server system 1000, or it may request both evaluation by an evaluator and evaluation through the machine learning model.
  • The request of the first evaluatee or the second evaluatee may include information specifying a particular company, a job at a particular company, or a particular competency to be evaluated.
  • The second evaluatee who has requested evaluation is provided with one or more questions through the question providing step (S13) and generates an answer video through the second evaluatee terminal 4000.
  • The second evaluatee terminal 4000 transmits the generated answer video to the server system 1000, and the server system 1000 performs the competency information derivation step (S14) of deriving the second evaluatee competency information by inputting the received answer video into the machine learning model.
  • The second evaluatee competency information derived through the competency information derivation step (S14) is competency information derived by the server system 1000 itself based on the second evaluatee's answer video. It may be derived in a form similar to the first evaluatee competency information evaluated by the evaluator, or in a different form, for example including discovery probability information for one or more behavioral indicators of the specific competency in the second evaluatee's answer video.
  • Preferably, first output information is derived for the answer video performed by the second evaluatee; in-depth questions are set based on the one or more derived behavioral indicators included in the first output information and the plurality of behavioral indicators of the specific competency to be evaluated; and by deriving second output information for the answer video to the in-depth questions, comprehensive evaluation information corresponding to the second evaluatee competency information can finally be derived. This will be described in more detail with reference to FIG. 11.
  • Meanwhile, the method may further include a comprehensive competency information derivation step (S15) in which the plurality of second evaluatee competency information derived in the competency information derivation step (S14) is input to a comprehensive machine learning model, and comprehensive evaluatee competency information including score information on the degree to which the second evaluatee possesses the specific competency is derived.
  • Specifically, the second evaluatee's answer video is input to the machine learning model to derive the second evaluatee competency information, and the second evaluatee's specific competency may be evaluated in this way; in another embodiment of the present invention, the competency information derivation step (S14) derives second evaluatee competency information for each answer video performed by the second evaluatee for each of a plurality of questions about the specific competency.
  • The comprehensive evaluatee competency information is derived based on the plurality of second evaluatee competency information derived in the competency information derivation step (S14).
  • Specifically, the plurality of second evaluatee competency information may be input to the comprehensive machine learning model included in the server system 1000 to derive the comprehensive evaluatee competency information.
  • The comprehensive evaluatee competency information synthesizes the plurality of second evaluatee competency information derived for each answer video to each of the plurality of questions provided to the second evaluatee, and comprehensively expresses the degree to which the second evaluatee possesses the specific competency to be evaluated. Like the evaluation score entered by the evaluator on the evaluation interface, the comprehensive evaluatee competency information includes score information on the degree of possession of the specific competency, which allows the degree to which the evaluatee possesses the specific competency to be recognized quantitatively.
  • Meanwhile, the comprehensive machine learning model may correspond to a separate machine-learning-based model distinct from the above-described machine learning model, or the comprehensive machine learning model and the machine learning model may be included in one overall machine learning model, in which case the second evaluatee competency information derived from the machine learning model may be input to the comprehensive machine learning model to derive the comprehensive evaluatee competency information.
  • FIG. 5 schematically illustrates a screen on which an evaluatee answers a question according to an embodiment of the present invention.
  • As shown in FIG. 5, the first evaluatee terminal 3000 or the second evaluatee terminal 4000 is provided with one or more questions through the question providing step (S13) performed by the server system 1000 and can generate an answer video.
  • In the question providing step (S13), one or more preset questions corresponding to the request of the first or second evaluatee are provided to the first evaluatee terminal 3000 or the second evaluatee terminal 4000.
  • For example, when the evaluatee requests evaluation for a job at a specific company, the question providing step (S13) may provide questions related to one or more competencies relevant to that job.
  • The first evaluatee terminal 3000 or the second evaluatee terminal 4000 to which the questions are provided captures a video of the evaluatee's answer to each question through a photographing module provided in the terminal.
  • In the screen shown in FIG. 5, the question provided in the question providing step (S13), the time limit for answering, and the elapsed answer time are displayed at the bottom, and the evaluatee's answer video is displayed in real time at the top. However, the present invention is not limited thereto; the screen may be configured in various display methods, for example the question may be displayed first and the screen then switched to show only the evaluatee's real-time answer video, or the question may be provided not only in text form but also in audio form.
  • As described above, the first evaluatee terminal 3000 and the second evaluatee terminal 4000 generate a video of the evaluatee's answers to the one or more questions provided through the question providing step (S13), and by transmitting the generated answer video to the server system 1000, evaluation of that answer video can be performed.
  • FIG. 6 schematically shows the configuration of an evaluation interface according to an embodiment of the present invention.
  • The evaluation interface may be displayed on the evaluator terminal 2000 through the evaluation interface providing step (S10) performed by the server system 1000.
  • On the evaluation interface, elements for the evaluator to perform evaluation based on the first evaluatee's answer video are displayed, and the answer video can be evaluated according to the evaluator's input.
  • Specifically, the evaluation interface includes an answer video layer (L1) in which the answer video performed by the first evaluatee is displayed.
  • The answer video is played according to the evaluator's playback input on the answer video layer (L1), so that the evaluator can check its contents.
  • Meanwhile, the question for the answer video, more specifically the question provided in the question providing step (S13) to generate the answer video, is displayed in text form, so that the evaluator can recognize more clearly which question the answer video was generated for.
  • The evaluation interface provided to the evaluator in the evaluation interface providing step (S10) includes a script layer (L2) in which a script generated based on the answer video performed by the first evaluatee is displayed. In the script layer (L2), when the evaluator selects a specific area of the script, a behavioral indicator list area (A1) including one or more behavioral indicators corresponding to the question or the specific competency may be displayed.
  • The script layer (L2) displays a script in which the content of the answer video displayed on the answer video layer (L1) has been converted into text form.
  • To this end, the server system 1000 may include a Speech-to-Text (STT) module that converts the audio information of the answer video into text information, and may derive the script for the answer video through the STT module.
  • More preferably, the server system 1000 further includes a video/audio separation module, so that the video information and audio information of the answer video are separated through the video/audio separation module and the separated audio information is input to the STT module to derive the script, as in the sketch below. The evaluator can therefore clearly grasp, in text form through the script layer (L2), speech that is not clearly recognizable in the answer video played in the answer video layer (L1).
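A sketch of the video/audio separation module and STT module just described, using ffmpeg and a generic speech recognizer. These tool choices are assumptions; the patent only requires that audio be separated from the answer video and converted into a text script.

```python
import subprocess
import speech_recognition as sr

# separate the audio track from the answer video (video/audio separation module)
subprocess.run(["ffmpeg", "-y", "-i", "answer_video.mp4",
                "-vn", "-ac", "1", "-ar", "16000", "answer_audio.wav"],
               check=True)

# convert the separated audio into a script (STT module)
recognizer = sr.Recognizer()
with sr.AudioFile("answer_audio.wav") as source:
    audio = recognizer.record(source)
script = recognizer.recognize_google(audio)
print(script)   # displayed to the evaluator in the script layer (L2)
```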
  • Meanwhile, the script may be generated not only by the STT module but also by the evaluator playing the answer video and directly entering the script on the script layer (L2); alternatively, the script initially generated by the STT module may be displayed on the script layer (L2) and the evaluator may finalize the script by correcting its content.
  • Meanwhile, a specific area of the script may be selected by an input such as a drag performed by the evaluator, and when a specific area of the script is selected in the script layer (L2), a behavioral indicator list area (A1) containing one or more behavioral indicators related to the question or the specific competency to be evaluated is displayed.
  • The evaluator may then select a behavioral indicator related to the selected area of the script, and the selected area of the script may be displayed on a behavioral indicator layer (L6) described later. This is described further with reference to FIG. 7.
  • The evaluation interface includes a score evaluation layer (L3), through which the evaluator enters a comprehensive evaluation score for the specific competency for the answer video.
  • When the evaluator selects the evaluation score area displayed on the score evaluation layer (L3), one or more preset evaluation scores are displayed.
  • For example, the preset evaluation scores may be displayed at 0.5-point intervals in a range of 1 to 5 points.
  • When the evaluator selects one of the displayed evaluation scores, the corresponding score is entered and displayed on the score evaluation layer (L3).
  • Meanwhile, the evaluation interface provided to the evaluator includes an in-depth question layer (L4), in which the evaluator enters a separate in-depth question for eliciting a specific behavioral indicator when a behavioral indicator corresponding to the question or the specific competency is not observed in the script, and a singularity layer (L5), in which the evaluator enters noteworthy details about the answer video performed by the first evaluatee.
  • Specifically, when the evaluator determines that a specific behavioral indicator among the one or more behavioral indicators for the specific competency is not observed in the answer video, the in-depth question layer (L4) receives from the evaluator an in-depth question designed to elicit an answer in which that behavioral indicator can be observed.
  • The in-depth question layer (L4) may additionally receive content that the evaluator wants to ask the first evaluatee, in addition to the above-described in-depth question.
  • The singularity layer (L5) may receive from the evaluator noteworthy details about the answer video displayed on the answer video layer (L1).
  • For example, the evaluator can enter into the singularity layer (L5) observations about the answer video such as 'the truthfulness of the answer is doubtful given signs of embarrassment such as trailing off at the end of sentences'.
  • The entered details may be included in the first evaluatee competency information.
  • the information input by the evaluator on the evaluation interface may be included in the above-described first evaluatee competency information; the first evaluatee competency information may be provided to the first evaluated person and may be used to train the machine learning model in the model learning step (S12).
  • the in-depth question input by the evaluator on the in-depth question layer (L4) can be used as learning data for the in-depth question recommendation model, which derives in-depth questions according to an evaluated person's answer image. More specifically, since an in-depth question input on the in-depth question layer (L4) corresponds to a question for eliciting an answer concerning a behavioral indicator that was not observed in the answer image, the in-depth question together with the unobserved behavioral indicator may be used as learning data for the in-depth question recommendation model.
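  • A minimal sketch of what one training example for such a recommendation model might look like, under the assumption that each example pairs an unobserved behavioral indicator (and, optionally, script context) with the evaluator's in-depth question as the target; the field names and contents are illustrative, not taken from the disclosure.

    # One hypothetical training example for the in-depth question recommendation
    # model: input = unobserved behavioral indicator plus script context,
    # target = the in-depth question the evaluator actually wrote.
    training_example = {
        "unobserved_indicator": "proposes concrete alternatives when a plan fails",
        "script_context": "... so we kept to the original schedule ...",
        "target_in_depth_question": (
            "You mentioned keeping to the original schedule. What alternatives "
            "did you consider when the schedule became infeasible?"
        ),
    }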
  • in another embodiment of the present invention, an expert comparison element may be displayed on the evaluation interface; when the evaluator makes a selection input on the expert comparison element, the contents of an evaluation performed, according to the evaluation method of the present invention, by an expert on the answer image displayed on the answer image layer (L1) are displayed.
  • the contents input by the expert may be displayed in correspondence with a specific area of the script for each of the one or more behavioral indicators, so that the evaluator can compare his or her own evaluation with that of the expert.
  • FIG. 7 schematically shows a configuration in which the behavior indicator layer L6 is displayed according to the selection of the evaluator in the script layer L2 according to an embodiment of the present invention.
  • the evaluation interface may further include a behavior indicator layer (L6) on which text corresponding to a specific area of the script selected by the evaluator on the script layer (L2) is displayed. More specifically, when the evaluator selects (B1) a specific area of the script displayed on the script layer (L2), the behavior indicator list area (A1) containing one or more behavior indicators corresponding to the question or the competency to be evaluated is displayed on the script layer (L2), and when the evaluator then selects a specific behavior indicator (B2) from the behavior indicator list area (A1), the text corresponding to the selected specific region (B1) of the script is displayed on the behavior indicator layer (L6).
  • in other words, when the evaluator selects a specific behavior indicator (B2) from the behavior indicator list area (A1) displayed for the selected specific region (B1) of the script, the text corresponding to that specific region (B1) is displayed on the behavior indicator layer (L6) at a position corresponding to the selected specific behavior indicator (B2). The behavior indicator layer (L6) thus displays the specific behavior indicator (B2) selected by the evaluator together with the text of the corresponding specific area (B1) of the script. In FIG. 7, one or more behavior indicators corresponding to the question related to the answer image or the competency to be evaluated are displayed in advance on the behavior indicator layer (L6), and when the evaluator selects a specific behavior indicator from the behavior indicator list area (A1), the text of the selected specific area of the script is displayed at the position corresponding to that indicator (at the bottom in FIG. 7).
  • accordingly, the evaluator can conveniently select the behavior indicator matching the selected area of the script, and since the selected behavior indicator and the corresponding specific area of the script are displayed separately on the behavior indicator layer (L6), the evaluator can save the time otherwise required to structure the evaluated answers according to the behavioral indicators, so that the evaluation based on the answer image proceeds more smoothly.
  • FIG. 8 schematically shows the configuration of an evaluation interface of another form according to an embodiment of the present invention.
  • the form of the evaluation interface provided to the evaluator is not limited to the form shown in FIG. 6 and may be configured in the form shown in FIG. 8 or in other forms.
  • the answer image layer (L10) and the script layer (L11) are located at the top of the evaluation interface, and based on the answer image displayed on the answer image layer (L10) and the script displayed on the script layer (L11), the evaluator can select the specific content of the script for each behavioral indicator. Meanwhile, when the evaluator selects a specific area of the script on the script layer (L11), the behavior indicator list area (A10) may be overlaid on the script layer (L11).
  • the content input by the evaluator is entered at the bottom of the evaluation interface; the evaluator is thus arranged to check the evaluated person's answer image at the top of the interface and input the evaluation of that answer image in the lower area.
  • the behavior indicator layer (L12) is located at the lower left of the evaluation interface and displays the script contents for each behavior indicator input by the evaluator on the script layer (L11).
  • an in-depth question layer (L13), a particulars layer (L14), and a score evaluation layer (L15) are arranged in sequence; after the evaluator enters an in-depth question and any particulars, an evaluation score for the corresponding answer image is finally input on the score evaluation layer (L15), and the input evaluation score may be displayed in the area (A11).
  • FIG. 9 schematically illustrates the process of training the machine learning model in the model learning step (S12) according to an embodiment of the present invention.
  • the model learning unit 1300 performs a model learning step (S12) of training the machine learning model based on the above-described first evaluatee competency information and thereby updating it into a reinforced machine learning model.
  • specifically, the machine learning model can be trained by inputting the one or more behavioral indicators corresponding to the specific competency included in the first evaluatee competency information together with the specific script area selected by the evaluator for each behavioral indicator.
  • since each machine learning model may perform evaluation of a specific competency, the server system 1000 may include one or more machine learning models, one for each competency.
  • in the model learning step (S12), either only the first evaluatee competency information for the specific competency, that is, the information input by the evaluator for the answer image performed by the first evaluated person with respect to that specific competency, is used as learning data for training the machine learning model; or, alternatively, labeling is performed on the first evaluatee competency information for each of a plurality of competencies, the labeled first evaluatee competency information is used as learning data, and first evaluatee competency information for competencies other than the specific competency evaluated by the machine learning model may also be used as learning data.
  • in other words, the one or more behavioral indicators corresponding to the specific competency included in the first evaluatee competency information and the specific script area selected by the evaluator for each of those behavioral indicators are input into the machine learning model, and through the machine learning model trained in this way, the competency information deriving step (S14) can derive second evaluatee competency information including discovery probability information for each behavioral indicator.
  • the machine learning model can also be trained using the answer image corresponding to the first evaluatee competency information as additional learning data, and a machine learning model trained in this way can also perform evaluation by analyzing facial expressions and emotions.
  • alternatively, the machine learning model may be trained using, as learning data, only the one or more behavioral indicators corresponding to the specific competency included in the first evaluatee competency information; additionally, the machine learning model may be trained using, as additional learning data, the evaluation score included in the first evaluatee competency information or the specific script area selected by the evaluator for each behavioral indicator included in the first evaluatee competency information.
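  • As a rough sketch of how such learning data might be assembled (the record fields and filtering logic are assumptions for illustration, not the disclosed format):

    # Build (features, label) pairs from first evaluatee competency information.
    # Each record is assumed to hold the competency, the behavioral indicators,
    # the evaluator-selected script areas, and the evaluator's score.
    def build_training_set(records, target_competency):
        features, labels = [], []
        for rec in records:
            if rec["competency"] != target_competency:
                continue  # drop, or keep with labels, if cross-competency data is used
            features.append({
                "behavioral_indicators": rec["behavioral_indicators"],
                "selected_script_areas": rec["selected_script_areas"],
            })
            labels.append(rec["evaluation_score"])  # e.g. 1.0-5.0 in 0.5 steps
        return features, labels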
  • the model learning unit 1300 may also train the above-described in-depth question recommendation model; specifically, the model learning unit 1300 can train the in-depth question recommendation model using, as learning data, the in-depth questions input by the evaluator that are included in the first evaluatee competency information.
  • hereinafter, the method of setting an in-depth question according to the answer image performed by the second evaluated person in order to derive the second evaluatee competency information, and of deriving the evaluation result by additionally considering the answer image the second evaluated person performs for the set in-depth question, will be explained.
  • although the description so far has distinguished the first and second evaluated persons, with first evaluatee competency information and second evaluatee competency information, the following describes evaluation based on an answer image in the server system; accordingly, the evaluated person described below may correspond to the above-mentioned second evaluated person, and the comprehensive evaluation information described below may correspond to the above-described second evaluatee competency information or comprehensive evaluatee competency information.
  • FIG. 10 schematically shows a detailed configuration of the capability information derivation unit 1500 according to an embodiment of the present invention.
  • the steps of setting an in-depth question based on the answer image performed by the evaluated person and of deriving the comprehensive evaluation information according to the answer image the evaluated person performs for the in-depth question are carried out by the competency information derivation unit 1500.
  • the competency information derivation unit 1500 includes a general questioning unit 1510, which primarily provides the evaluated person with one or more questions about the specific competency in order to evaluate it.
  • the evaluated person may request, through the evaluated person's terminal, an evaluation of a specific competency, an interview evaluation for a company the evaluated person wishes to apply to, or an interview evaluation for a specific job at such a company.
  • the one or more questions may correspond to questions designed so that the one or more behavioral indicators related to the specific competency can be observed in the evaluated person's answers.
  • the evaluated person may receive, through the evaluated person's terminal, the one or more questions provided by the first question providing unit 1511, and an answer image for the one or more questions may be generated through that terminal; thereafter, the evaluated person's terminal transmits the generated answer image to the server system 1000.
  • the first output information derivation unit 1512 derives first output information by inputting the answer image performed by the evaluated person, as received by the server system 1000, into the above-described machine learning model. More specifically, the first output information may include evaluation information on the specific competency, derived through the machine learning model based on the answer image performed by the evaluated person, and derived behavioral indicators related to that evaluation information.
  • the competency information derivation unit 1500 may further include an in-depth question setting unit 1520, which derives the in-depth questions to be provided to the evaluated person based on the first output information derived by the first output information derivation unit 1512.
  • specifically, the in-depth question setting unit 1520 can derive in-depth questions for eliciting from the evaluated person answers related to the behavioral indicators that, among the plurality of behavioral indicators corresponding to the specific competency, do not correspond to the derived behavioral indicators included in the first output information.
  • the in-depth question setting unit 1520 may derive such an in-depth question, related to a behavioral indicator not corresponding to the derived behavioral indicators, from the one or more questions related to the plurality of behavioral indicators for the specific competency preset in the server system 1000.
  • the competency information derivation unit 1500 may further include a competency evaluation unit 1530, which provides the in-depth questions derived by the in-depth question setting unit 1520 to the evaluated person and finally performs the evaluation of the specific competency on the basis of the answer images the evaluated person performs for those in-depth questions.
  • the competency evaluation unit 1530 includes an in-depth questioning unit 1540 and a comprehensive evaluation information derivation unit 1550; the in-depth questioning unit 1540 includes a second question providing unit 1541 for providing the evaluated person with the one or more in-depth questions derived by the in-depth question setting unit 1520, and a second output information derivation unit 1542 for deriving second output information based on the answer image performed by the evaluated person for the one or more in-depth questions provided by the second question providing unit 1541.
  • the first question providing unit 1511 and the second question providing unit 1541 may be included in the question provision unit 1400 of the above-described server system 1000, so that the questions and in-depth questions about the specific competency are provided to the evaluated person.
  • the evaluated person may receive, through the evaluated person's terminal, the one or more in-depth questions provided by the second question providing unit 1541 and may generate an answer image for them through that terminal; thereafter, the terminal transmits the answer image for the generated one or more in-depth questions to the server system 1000.
  • the second output information derivation unit 1542 derives second output information by inputting into the machine learning model the answer image performed by the evaluated person for the one or more in-depth questions, as received by the server system 1000.
  • the second output information may include evaluation information on the specific competency, derived through the machine learning model based on the answer image performed by the evaluated person for the one or more in-depth questions, and derived behavioral indicators related to that evaluation information.
  • the comprehensive evaluation information derivation unit 1550 derives comprehensive evaluation information based on the first output information derived by the first output information derivation unit 1512 and the second output information derived by the second output information derivation unit 1542; the comprehensive evaluation information may correspond to the above-described second evaluatee competency information or comprehensive evaluatee competency information.
  • whereas the competency information derivation unit 1500 in the 'method of providing automated evaluation of interview images using a machine learning model' derives the evaluatee competency information based only on the answer image performed by the evaluated person, in this configuration in-depth questions are derived according to the answer image the evaluated person performs first, and the evaluation is performed by further considering the answer image the evaluated person performs for the in-depth questions, so that a more reliable behavioral-interview-based evaluation can be performed.
  • FIG. 11 schematically illustrates a method of deriving an in-depth question for automated evaluation of an interview image performed by the server system 1000 according to an embodiment of the present invention.
  • in the server system 1000, a plurality of behavioral indicators and a plurality of questions are preset for a specific competency, and each of the plurality of behavioral indicators has a correlation with at least one of the plurality of questions. The automated evaluation method includes: a general question step, comprising a first question providing step (S20) of providing the evaluated person with one or more of the preset questions for performing the evaluation of the specific competency, and a first output information deriving step (S21) of deriving first output information, including evaluation information on the specific competency of the evaluated person and derived behavioral indicators related to that evaluation information, by inputting into the machine learning model the answer image performed by the evaluated person for the one or more provided questions; an in-depth question setting step (S22) of setting one or more in-depth questions based on the derived one or more derived behavioral indicators; and a competency evaluation step of performing the evaluation of the specific competency based on the answer image performed by the evaluated person for the in-depth questions and the first output information derived in the first output information deriving step (S21).
  • specifically, a first question providing step (S20) is performed, in which the server system 1000 provides the evaluated person with one or more questions for performing the evaluation of the specific competency according to the evaluated person's request.
  • for each competency, a plurality of behavioral indicators and a plurality of related questions are preset in the server system 1000, and each of the plurality of behavioral indicators has a correlation with at least one of the plurality of questions.
  • one or more questions about the competency to be evaluated may be provided to the corresponding evaluated person's terminal, so that the evaluated person can generate an answer image for the one or more questions.
  • the generated answer image performed by the evaluated person may be transmitted to the server system 1000 and stored in the DB 1600.
  • the first output information deriving step (S21) derives the first output information by inputting the answer image performed by the evaluated person into the machine learning model; the first output information includes evaluation information for the specific competency to be evaluated and derived behavioral indicators related to that evaluation information.
  • the evaluation information may include discovery probability information for each behavioral indicator related to the specific competency in the corresponding answer image, and text information on the specific content of the answer image related to each behavioral indicator.
  • a derived behavioral indicator is a behavioral indicator observed in the content of the answer image among the plurality of behavioral indicators related to the specific competency to be evaluated; preferably, a behavioral indicator whose discovery probability information exceeds a predetermined value may be derived as a derived behavioral indicator.
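  • For illustration, the thresholding just described might look like the following sketch (the cutoff value is an assumption; the disclosure leaves it as 'a predetermined value'):

    THRESHOLD = 0.6  # illustrative predetermined value

    def derive_behavioral_indicators(discovery_probabilities):
        # Keep only indicators whose discovery probability exceeds the cutoff.
        return [indicator
                for indicator, p in discovery_probabilities.items()
                if p > THRESHOLD]

    print(derive_behavioral_indicators(
        {"indicator_1": 0.83, "indicator_2": 0.41, "indicator_3": 0.72}))
    # -> ['indicator_1', 'indicator_3']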
  • the general question step, comprising the first question providing step (S20) and the first output information deriving step (S21), may be performed repeatedly. For example, when there are a plurality of questions about the specific competency to be evaluated, the general question step may be repeated as many times as there are questions, so that first output information is derived for each question.
  • alternatively, the first question providing step (S20) may provide a plurality of questions to the evaluated person at once, and the first output information deriving step (S21) may be performed a plurality of times so as to derive first output information for the answer image to each question.
  • the in-depth question setting step (S22) derives one or more in-depth questions based on the one or more derived behavioral indicators included in the one or more pieces of first output information derived in the first output information deriving step (S21). More specifically, the in-depth question setting step (S22) derives one or more in-depth questions for eliciting from the evaluated person an answer related to a behavioral indicator that is not included in the one or more derived behavioral indicators among the plurality of behavioral indicators corresponding to the specific competency to be evaluated.
  • to this end, a question related to a behavioral indicator not included in the derived behavioral indicators may be derived from the preset questions as an in-depth question, or in-depth questions may be derived through a rule-based method or a machine learning model.
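  • The rule-based variant reduces to a lookup against the preset questions, roughly as in the following sketch (the question pool contents and helper name are hypothetical):

    # Preset questions keyed by the behavioral indicator they are designed
    # to elicit; contents are purely illustrative.
    QUESTION_POOL = {
        "indicator_2": ["Describe a time you had to persuade a reluctant colleague."],
        "indicator_5": ["Tell me about a result you measured after changing your approach."],
    }

    def set_in_depth_questions(all_indicators, derived_indicators):
        # One in-depth question per behavioral indicator that was not observed.
        missing = [i for i in all_indicators if i not in derived_indicators]
        return [QUESTION_POOL[i][0] for i in missing if i in QUESTION_POOL]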
  • the automated evaluation method further includes: an in-depth question step, comprising a second question providing step (S23) of providing the evaluated person with one or more of the in-depth questions set in the in-depth question setting step (S22), and a second output information deriving step (S24) of deriving second output information, including evaluation information on the specific competency of the evaluated person and derived behavioral indicators related to that evaluation information, by inputting into the machine learning model the answer image performed by the evaluated person for the one or more in-depth questions; and a comprehensive evaluation information deriving step (S25) of deriving comprehensive evaluation information on the specific competency of the evaluated person based on the first output information and the second output information.
  • the one or more in-depth questions derived in the in-depth question setting step (S22) are transmitted to the corresponding evaluated person's terminal and provided to the evaluated person in the second question providing step (S23), and the evaluated person can generate an answer image for the provided in-depth questions through that terminal.
  • the evaluated person's terminal transmits the answer image performed by the evaluated person for the generated one or more in-depth questions to the server system 1000, and the server system 1000 may receive the answer image and store it in the DB 1600.
  • the second output information deriving step (S24) derives the second output information by inputting into the machine learning model the answer image performed by the evaluated person for the one or more in-depth questions. The machine learning model used in the second output information deriving step (S24) may be the same as that of the first output information deriving step (S21) described above, and the configuration of the second output information is the same as that of the first output information derived in the first output information deriving step (S21), except that the second output information is derived from the answer image performed for the in-depth questions.
  • the comprehensive evaluation information deriving step (S25) derives comprehensive evaluation information on the specific competency of the evaluated person based on the first output information and the second output information. More specifically, the first output information is derived based on the answer image performed by the evaluated person for the one or more questions, related to the specific competency to be evaluated, provided in the first question providing step (S20), and includes information on the derived behavioral indicators for the specific competency that can be observed in the corresponding answer image.
  • the second output information is derived based on the answer image performed by the evaluated person for the one or more in-depth questions intended to elicit an answer concerning a behavioral indicator that, among the plurality of behavioral indicators for the specific competency, does not correspond to the derived behavioral indicators included in the first output information.
  • as a result, the derived behavioral indicators included in the first output information and the second output information together may cover all of the plurality of behavioral indicators for the specific competency, so that the evaluation of the specific competency can be performed based on the first output information and the second output information.
  • if some behavioral indicators are still not covered, the in-depth question setting step (S22) is performed again to derive additional in-depth questions about the behavioral indicators that do not correspond to the derived behavioral indicators, and likewise the second question providing step (S23) and the second output information deriving step (S24) are repeated, making it possible to derive output information for the answer images the evaluated person performs concerning those behavioral indicators; this iterative process can be repeated until the derived behavioral indicators included in the pieces of output information cover all of the plurality of behavioral indicators for the specific competency.
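  • The iteration described above can be sketched as a simple loop; ask() and evaluate() stand in for the question providing units and the machine learning model, and the round cap is an assumption to keep the interview finite:

    def run_until_covered(all_indicators, ask, evaluate, max_rounds=3):
        # Repeat in-depth questioning until every behavioral indicator for the
        # specific competency appears among the derived behavioral indicators.
        observed = set()
        for _ in range(max_rounds):
            missing = set(all_indicators) - observed
            if not missing:
                break
            answers = ask(sorted(missing))          # provide in-depth questions
            observed |= set(evaluate(answers))      # derived indicators this round
        return observed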
  • conversely, the comprehensive evaluation information deriving step (S25) can also be performed immediately without the steps related to in-depth questions; in this case, the comprehensive evaluation information is derived in step (S25) based on the first output information.
  • the in-depth question setting step (S22) may, based on the plurality of behavioral indicators set for the specific competency and the one or more derived behavioral indicators derived through the general question step, determine the behavioral indicators among the plurality that were not derived as derived behavioral indicators, and set one or more in-depth questions for eliciting from the evaluated person answers related to those non-derived behavioral indicators.
  • specifically, the in-depth question setting step (S22) includes a step (S30) of determining, among the plurality of behavioral indicators corresponding to the specific competency to be evaluated, the behavioral indicators not included in the one or more derived behavioral indicators of the first output information. By determining in step (S30) only the behavioral indicators not included in the first output information, in-depth questions corresponding to questions that can elicit from the evaluated person answers related to those behavioral indicators can be derived in the step (S31) described below.
  • the in-depth question setting step (S22) further includes a step (S31) of setting one or more in-depth questions for behavior indicators not included in the derived behavior indicators.
  • in step (S31), one or more in-depth questions related to those behavioral indicators are derived so that the behavioral indicators not derived as derived behavioral indicators can be observed in the evaluated person's answer.
  • the one or more in-depth questions may be the one or more questions that, among the questions set in the server system 1000 for each behavioral indicator, correspond to a behavioral indicator not derived as a derived behavioral indicator, or a specific question among those questions may be derived as the in-depth question.
  • the in-depth question setting step (S22) performed by the in-depth question setting unit 1520 shown in FIG. 12 thus corresponds to a method in which predetermined steps are performed to derive, as an in-depth question, a specific question from the question pool stored in the server system 1000 for each behavioral indicator, namely from the pool corresponding to a behavioral indicator not derived as a derived behavioral indicator; in another embodiment of the present invention, however, an in-depth question can be derived using a machine-learned in-depth question recommendation model.
  • FIG. 13 schematically shows the detailed steps of the in-depth question setting step implemented in another way according to an embodiment of the present invention.
  • in this variant, the in-depth question setting step may, based on the plurality of behavioral indicators set for the specific competency and the one or more derived behavioral indicators derived through the general question step, determine as an incomplete behavioral indicator any behavioral indicator among the plurality that was derived as a derived behavioral indicator but does not meet a preset discrimination criterion, and set one or more in-depth questions for eliciting from the evaluated person an answer related to that incomplete behavioral indicator; likewise, one or more in-depth questions can be set for determining the behavioral indicators not derived as derived behavioral indicators and for eliciting answers about them from the evaluated person.
  • in other words, as shown in FIG. 13, the in-depth question setting step (S22) may set one or more in-depth questions for determining, among the plurality of behavioral indicators set for the specific competency, the indicators that were derived as derived behavioral indicators but not completely, that is, the incomplete behavioral indicators, and for eliciting from the evaluated person answers related to those incomplete behavioral indicators.
  • specifically, the in-depth question setting step (S22) determines (S40) any behavioral indicator that, among the plurality of behavioral indicators corresponding to the specific competency to be evaluated, is included in the one or more derived behavioral indicators of the first output information but does not meet the preset discrimination criterion.
  • the preset discrimination criterion may correspond to a reference value for determining the degree to which a behavioral indicator can be included in the derived behavioral indicators.
  • when the criterion is met, the specific behavioral indicator is regarded as completely derived as a derived behavioral indicator; when it is not met, the specific behavioral indicator is determined to be an incompletely derived behavioral indicator, that is, an incomplete behavioral indicator.
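  • With illustrative cutoffs (the disclosure only speaks of 'a preset discrimination criterion'), the three-way classification might be sketched as:

    LOW, HIGH = 0.5, 0.8  # assumed discovery probability cutoffs

    def classify_indicator(discovery_probability):
        # Above HIGH: completely derived; between LOW and HIGH: incomplete
        # behavioral indicator (warrants an in-depth question); below LOW:
        # not derived at all.
        if discovery_probability >= HIGH:
            return "derived"
        if discovery_probability >= LOW:
            return "incomplete"
        return "not derived"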
  • by discriminating in step S40, as an incomplete behavioral indicator, a behavioral indicator that is included in the first output information but is not clearly observed, in-depth questions corresponding to questions that can elicit from the evaluated person an answer related to the incomplete behavioral indicator can be derived in the step S41 described below.
  • in step S41, one or more in-depth questions related to the incomplete behavioral indicator are derived so that the incomplete behavioral indicator can be observed in the evaluated person's answer.
  • the one or more in-depth questions may be the one or more questions that, among the questions set in the server system 1000 for each behavioral indicator, correspond to an incomplete behavioral indicator, or a specific question among those questions may be derived as the in-depth question.
  • the in-depth question setting step (S22) performed by the in-depth question setting unit 1520 shown in FIG. 13 thus corresponds to a method in which predetermined steps are performed to derive, as an in-depth question, a specific question from the question pool stored in the server system 1000 for each behavioral indicator, namely from the pool corresponding to an incomplete behavioral indicator.
  • the in-depth question setting step (S22) may use only one of the two methods: setting one or more in-depth questions for the behavioral indicators not derived as derived behavioral indicators, as described with reference to FIG. 12, or setting one or more in-depth questions for the incomplete behavioral indicators, as described with reference to FIG. 13.
  • alternatively, the in-depth question setting step (S22) may use both methods, setting one or more in-depth questions both for the behavioral indicators not derived as derived behavioral indicators and for the incomplete behavioral indicators.
  • FIG. 14 schematically illustrates a process in which the machine learning model derives output information in the competency information derivation unit 1500 according to an embodiment of the present invention.
  • the competency information derivation unit 1500 derives output information by inputting the answer image of the evaluated person into the machine learning model. Specifically, the first output information derivation unit 1512 included in the competency information derivation unit 1500 may derive the first output information by inputting into the machine learning model the answer image performed by the evaluated person for the one or more questions provided by the first question providing unit 1511, and the second output information derivation unit 1542 included in the competency information derivation unit 1500 may derive the second output information by inputting into the machine learning model the answer image performed by the evaluated person for the one or more in-depth questions provided by the second question providing unit 1541.
  • the machine learning model may include various detailed machine learning models that perform evaluation of the answer image performed by the evaluated person.
  • each detailed machine learning model may correspond to a model that is trained and performs evaluation based on deep learning, or to a model that derives feature information about the answer image according to a preset routine or algorithm rather than through learning and performs evaluation on that feature information.
  • the competency information derivation unit 1500 basically receives the answer image performed by the evaluated person, comprising a plurality of consecutive image frames and voice information, and derives the output information through a machine learning model trained with machine learning technology such as deep learning.
  • additionally, the competency information derivation unit 1500 may analyze the answer image based on preset rules rather than machine learning and derive specific evaluation values.
  • the competency information derivation unit 1500 may extract image information and audio information from the answer image comprising a plurality of consecutive image frames and audio and input each into a corresponding detailed machine learning model to derive result values, or may combine the image information and the voice information and input them into a detailed machine learning model to derive a result value.
  • the competency information derivation unit 1500 may itself include the machine learning model and derive output information based on the feature information derived from the answer image, or may derive output information based on that feature information by calling a separately provided machine learning model.
  • alternatively, the first output information derived in the first output information deriving step (S21) may be input into a machine learning based in-depth question recommendation model to derive one or more in-depth questions for eliciting from the evaluated person answers related to the behavioral indicators not derived as derived behavioral indicators.
  • that is, the in-depth question setting unit 1520 may derive an in-depth question by performing the predetermined steps of FIG. 12 described above, or may derive an in-depth question by inputting the first output information into the in-depth question recommendation model as shown in FIG. 15.
  • the in-depth question recommendation model learns from the in-depth question information included in the above-described first evaluatee competency information; more specifically, the in-depth question information corresponds to the in-depth questions input by the evaluator on the in-depth question layer included in the evaluation interface, and the in-depth question recommendation model can perform learning based on this in-depth question information.
  • the in-depth question recommendation model may be trained on the in-depth question information alone, but preferably it additionally learns the behavioral indicators related to the in-depth questions input by the evaluator on the evaluation interface, so that it can learn the relation between unobserved behavioral indicators and the corresponding in-depth questions.
  • the in-depth question recommendation model may include various detailed machine learning models for deriving in-depth questions based on the answer image of the evaluated person; a detailed model may be one trained based on deep learning to derive in-depth questions, or one in which feature information is derived according to a preset routine or algorithm rather than through learning and an in-depth question is derived based on that feature information.
  • the in-depth question setting unit 1520 may itself include the in-depth question recommendation model and derive one or more in-depth questions based on the first output information, or may derive one or more in-depth questions based on the first output information by calling a separately provided in-depth question recommendation model.
  • although the in-depth question recommendation model shown in FIG. 15 is shown as a separate model distinct from the machine learning model shown in FIG. 14, in another embodiment of the present invention the in-depth question recommendation model may be included in the machine learning model and may derive one or more in-depth questions by receiving the first output information derived by the detailed machine learning models included in the machine learning model.
  • FIG. 16 schematically shows a configuration for deriving output information from the machine learning model by inputting the answer image performed by the evaluated person into the machine learning model according to an embodiment of the present invention.
  • in the first output information deriving step (S21) and the second output information deriving step (S24) performed by the competency information derivation unit 1500, predetermined steps may be performed to process the received answer image performed by the evaluated person, and the processed answer image may be input into the machine learning model to derive output information.
  • (A), (B), and (C) of FIG. 16 correspond to various embodiments of the configuration of the input elements the competency information derivation unit 1500 inputs into the machine learning model.
  • specifically, in the first output information deriving step (S21) and the second output information deriving step (S24), image information and audio information may be separated from the answer image performed by the evaluated person, and each of the separated image information and audio information may be preprocessed and input into the machine learning model.
  • that is, the first output information deriving step (S21) receives the answer image performed by the evaluated person for the one or more questions provided by the first question providing unit 1511 and separates image information and audio information from it; likewise, the second output information deriving step (S24) receives the answer image performed by the evaluated person for the one or more in-depth questions provided by the second question providing unit 1541 and separates image information and audio information from it.
  • to this end, the competency information derivation unit 1500 includes a video/audio separation module, which divides the answer images received in the first output information deriving step (S21) and the second output information deriving step (S24) into image information and audio information.
  • the competency information derivation unit 1500 further includes a preprocessing module that preprocesses each of the image information and the audio information; through the preprocessing module, the image information and the audio information are converted into a form suited to the algorithm of the machine learning model, which can improve the performance of the machine learning model.
  • the preprocessing module may, for the image information and the audio information, process missing values or features through a data cleaning step, encode categorical attributes into numeric data, for example by one-hot encoding, through a handling-text-and-categorical-attributes step, transform the data through a custom transformers step, set the range of the data through a feature scaling step, and automate this sequence through a transformation pipelines step.
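  • The named steps mirror a standard scikit-learn preprocessing workflow; a minimal sketch for illustrative tabular features follows (the column names are assumptions, and the disclosed system is not necessarily built on scikit-learn):

    from sklearn.compose import ColumnTransformer
    from sklearn.impute import SimpleImputer
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import OneHotEncoder, StandardScaler

    # Data cleaning + feature scaling for numeric features, one-hot encoding
    # for a categorical attribute, automated as one transformation pipeline.
    numeric = Pipeline([
        ("clean", SimpleImputer(strategy="median")),  # handle missing values
        ("scale", StandardScaler()),                  # feature scaling
    ])
    preprocess = ColumnTransformer([
        ("num", numeric, ["pitch_mean", "speech_rate"]),
        ("cat", OneHotEncoder(), ["question_type"]),  # one-hot encoding
    ])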
  • the steps performed in the preprocessing module are not limited to those described above and may include various other preprocessing steps for the machine learning model.
  • the competency information derivation unit 1500 further includes an STT (Speech to Text) module, which performs speech-to-text conversion on the answer images received in the first output information deriving step (S21) and the second output information deriving step (S24); the conversion performed by the STT module may use any of various existing STT conversion methods.
  • on the other hand, the text information need not be derived solely by the STT conversion method of the above-described STT module: the text for the answer image may be input directly by the manager of the server system 1000, or the final text information may be derived by first deriving text information on the answer image through the STT module and then having the manager of the server system 1000 or the like correct it.
  • the STT module may receive the audio information of the answer image separated through the video/audio separation module, perform STT conversion, and thereby convert the audio information into text information.
  • the first output information deriving step (S21) and the second output information deriving step (S24) may further include performing embedding, that is, expressing the derived text information as a vector.
  • to this end, the competency information derivation unit 1500 may further include an embedding module, which performs embedding on the text information derived based on the answer image.
  • moreover, the competency information derivation unit 1500 may also perform embedding that expresses as a vector the text information of the question for the answer image performed by the evaluated person, in addition to the text information derived based on the answer image; the embedded vector for the question may then be an additional input to the machine learning model. The machine learning model can therefore derive more sophisticated output information by considering not only the answer image but also the question it answers.
  • the embedding module may express each piece of text information in vector form using various embedding methods such as one-hot encoding, CountVectorizer, TfidfVectorizer, and Word2Vec.
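  • Two of the named methods, applied to illustrative scripts (a sketch of the general idea, not the module's actual implementation):

    from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

    scripts = ["I led the project to completion", "I measured the result afterwards"]
    count_vectors = CountVectorizer().fit_transform(scripts)  # raw term counts
    tfidf_vectors = TfidfVectorizer().fit_transform(scripts)  # TF-IDF weights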
  • the vectors embedded in this way are input into the machine learning model; that is, the machine learning model may receive the above-described preprocessed image information, preprocessed audio information, and embedded vectors, and derive output information about the answer image performed by the evaluated person.
  • alternatively, the competency information derivation unit 1500 may derive output information for the answer image by inputting into the machine learning model the preprocessed image information, the preprocessed audio information, the text information derived based on the answer image performed by the evaluated person, and a competency identifier for that answer image.
  • the machine learning model shown in (C) of FIG. 16 may correspond to a machine learning-based model capable of performing evaluation of a plurality of competencies rather than performing evaluation of a specific competency.
  • by inputting the competency identifier into the machine learning model, a specific competency corresponding to that identifier can be evaluated. That is, the machine learning model can evaluate each of a plurality of competencies, and output information can be derived by inputting into the machine learning model the answer image performed by the evaluated person together with a competency identifier identifying the specific competency to be evaluated through that answer image.
  • output information can also be derived by inputting into the machine learning model a vector embedding the text information derived based on the answer image, a vector embedding the text of the question for that answer image, and the competency identifier corresponding to that answer image.
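  • One simple way to realize such conditioning, sketched here as an assumption (the disclosure does not fix the encoding), is to append a one-hot competency identifier to the other model inputs:

    import numpy as np

    COMPETENCY_IDS = {"communication": 0, "problem_solving": 1, "leadership": 2}

    def build_model_input(image_features, audio_features, text_vector, competency):
        # A one-hot competency identifier steers the single multi-competency
        # model toward the specific competency to be evaluated.
        one_hot = np.zeros(len(COMPETENCY_IDS))
        one_hot[COMPETENCY_IDS[competency]] = 1.0
        return np.concatenate([image_features, audio_features, text_vector, one_hot])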
  • FIG. 17 schematically shows a configuration for setting an in-depth question according to output information derived by inputting an image of an answer performed by a subject to be evaluated into a machine learning model, and deriving comprehensive evaluation information according to an embodiment of the present invention.
  • the first question providing unit 1511 provides the evaluated person with one or more questions related to the specific competency; accordingly, the first output information derivation unit 1512 may process the answer image performed by the evaluated person for those questions and derive the first output information by inputting the processed answer image into the machine learning model.
  • the one or more questions provided by the first question providing unit 1511 may consist of independent, individual questions that do not take the relationship between questions into account, but preferably a mutual correlation exists among them.
  • for example, the first question corresponds to a question asking about the 'situation' of the evaluated person's past experience related to the specific competency; the second question, connected with the first, corresponds to a question asking what 'action' was taken in the situation of the first question; and the third question, connected with the first and second, corresponds to a question asking about the 'result' of the action in the second question, so that the questions can be configured to be interconnected.
  • while in the foregoing the first question providing unit 1511 provides each question individually and the first output information derivation unit 1512 derives first output information for each answer image performed by the evaluated person for each individually provided question, in another embodiment of the present invention the first question providing unit 1511 provides one or more questions to the evaluated person at once, and the first output information derivation unit 1512 may derive the first output information either by dividing the answer image performed for the one or more questions by question and inputting each part into the machine learning model, or by inputting the entire answer image into the machine learning model.
  • the first output information derivation unit 1512 derives the first output information, which includes as derived behavioral indicators the behavioral indicators for the specific competency observed in the answer image, and the in-depth question setting unit 1520 performs the in-depth question setting step (S22) to derive one or more in-depth questions based on the first output information.
  • in this example, the behavioral indicators related to the specific competency are behavioral indicators 1 to 5; the derived behavioral indicator included in the first output information derived from the answer image performed by the evaluated person for the first question is behavioral indicator 1, the derived behavioral indicators included in the first output information derived from the answer image for the second question include behavioral indicators 3 and 4, and the derived behavioral indicator included in the first output information derived from the answer image for the third question includes behavioral indicator 4.
  • the in-depth question setting unit 1520 derives in-depth questions related to the behavioral indicators that do not correspond to the derived behavioral indicators included in each piece of first output information; since none of the first output information includes derived behavioral indicators corresponding to behavioral indicators 2 and 5, the in-depth question setting unit 1520 derives in-depth questions for eliciting from the evaluated person answers related to behavioral indicators 2 and 5.
  • as described above, the in-depth question setting unit 1520 may derive as in-depth questions one or more of the preset questions related to the behavioral indicators that do not correspond to the derived behavioral indicators, or may derive separate in-depth questions through the machine-learned in-depth question recommendation model.
  • the second question providing unit 1541 provides the one or more in-depth questions to the evaluated person; accordingly, the second output information derivation unit 1542 may, as shown in FIG. 16, process the answer image performed by the evaluated person for the one or more in-depth questions and derive the second output information by inputting the processed answer image into the machine learning model.
  • the second output information derivation unit 1542 derives the second output information, which includes as derived behavioral indicators the behavioral indicators for the specific competency observed in the answer image.
  • the first output information derived in the first output information deriving step (S21) and the second output information derived in the second output information deriving step (S24) may further include discovery probability information for the derived behavioral indicators related to the evaluation information, and text information of the evaluated person's answer image corresponding to that discovery probability information.
  • in other words, the first output information and the second output information derived through the machine learning model may include discovery probability information indicating, for each of the one or more derived behavioral indicators corresponding to the answer image performed by the evaluated person, the probability that the answer image contains the relevant answer content; this discovery probability information may also be included in the comprehensive evaluation information derived in the comprehensive evaluation information deriving step (S25).
  • that is, just as the evaluator selects a specific area of the script on the script layer and selects the corresponding specific behavioral indicator in the behavioral indicator list area, thereby marking the specific answer content in the first evaluated person's answer image that corresponds to a behavioral indicator, so the machine learning model can calculate probabilistically whether answer content corresponding to each behavioral indicator is present.
  • furthermore, the first output information derivation unit 1512 and the second output information derivation unit 1542 may further include in the first output information and the second output information, respectively, the specific text information that, among the text information derived in the competency information derivation unit 1500 based on the answer image performed by the evaluated person, corresponds to the discovery probability information calculated in the machine learning model for each of the one or more derived behavioral indicators.
  • more specifically, the first output information derivation unit 1512 and the second output information derivation unit 1542 may further include in the first output information and the second output information, respectively, the specific text information corresponding to each derived behavioral indicator whose discovery probability information, as calculated by the machine learning model, exceeds a predetermined value, among the text information derived based on the answer image performed by the evaluated person.
  • specific text information corresponding to the discovery probability information for each of one or more derived behavioral indicators may be derived through the machine learning model.
  • the comprehensive evaluation information derived in the comprehensive evaluation information deriving step (S25) may include a score for the specific competency calculated by synthesizing the discovery probability information for each of the derived behavioral indicators derived in the first output information deriving step (S21) and the second output information deriving step (S24).
  • the comprehensive evaluation information deriving step (S25) performed by the comprehensive evaluation information derivation unit 1550 derives comprehensive evaluation information that finally evaluates the specific competency of the evaluated person based on the first output information and the second output information.
  • the comprehensive evaluation information derivation unit 1550 calculates a score for the answer images performed by the evaluated person, analogous to the evaluation score the evaluator inputs on the score evaluation layer (L3) of the above-described evaluation interface, and that score may be included in the comprehensive evaluation information.
  • like the evaluation score input by the evaluator on the score evaluation layer (L3), the score calculated by the comprehensive evaluation information derivation unit 1550 may be calculated as one of a plurality of scores set at specific intervals within a preset range.
  • since the comprehensive evaluation information includes a graded score for the evaluation of the specific competency, the degree to which the evaluated person possesses the specific competency can be quantified and provided.
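  • How such a graded score is computed is left open by the disclosure; one plausible sketch is to average the discovery probabilities across both rounds of answers and snap the result to the 0.5-point grid on 1 to 5 points used by the score evaluation layer:

    def competency_score(discovery_probabilities):
        # Map the mean discovery probability from [0, 1] onto [1, 5], then
        # round to the nearest 0.5-point step and clamp to the valid range.
        mean_p = sum(discovery_probabilities) / len(discovery_probabilities)
        raw = 1.0 + 4.0 * mean_p
        return min(5.0, max(1.0, round(raw * 2) / 2))

    print(competency_score([0.83, 0.41, 0.72, 0.9, 0.55]))  # -> 3.5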
  • in another embodiment, the comprehensive evaluation information derived in the comprehensive evaluation information deriving step (S25) may include a score for the specific competency calculated by synthesizing the preprocessing result information for the answer images input in the first output information deriving step (S21) and the second output information deriving step (S24), respectively.
  • alternatively, the comprehensive evaluation information derived in the comprehensive evaluation information deriving step (S25) may include a score for the specific competency of the evaluated person derived by inputting into a separate machine learning model the one or more answer images performed by the evaluated person that were input into the machine learning model in the first output information deriving step (S21) and the one or more answer images performed for the in-depth questions that were input into the machine learning model in the second output information deriving step (S24).
  • the answer images input into the separate machine learning model may first be preprocessed through predetermined preprocessing steps before being input.
  • the machine learning model for deriving the first output information and the second output information and the separate machine learning model for deriving the comprehensive evaluation information may also be included in a single machine learning model; in this case, each answer image is input into the single machine learning model to derive the first output information and the second output information, and the comprehensive evaluation information may be derived based on each answer image or on the first output information and the second output information derived by the machine learning model.
  • FIG. 18 schematically illustrates a configuration for deriving comprehensive evaluation information that additionally uses feature information derived from the machine learning model to which the evaluatee's answer videos are input, according to an embodiment of the present invention.
  • The comprehensive evaluation information derived in the comprehensive evaluation information derivation step (S25) may include a score for the specific competency derived based on one or more of: the discovery probability information and text information for the derived behavioral indicators included in the first output information and the second output information, the basic score information for the corresponding answer videos, and the feature information generated within the machine learning model in order to derive the first output information and the second output information.
  • Specifically, FIG. 18 illustrates a process of deriving comprehensive evaluation information based on the answer videos performed by the evaluatee.
  • An answer video for each of the plurality of questions provided to the evaluatee is input to the machine learning model, and the model derives output information corresponding to the competency derivation result for each answer video.
  • The output information may include discovery probability information for the derived behavioral indicators corresponding to the answer video, text information for those indicators, and basic score information.
  • Unlike the score for the specific competency included in the comprehensive evaluation information, the basic score information corresponds to a score for the single answer video.
  • The process of deriving the output information may be performed in the above-described first output information deriving step (S21) and second output information deriving step (S24).
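The per-answer output information described above can be pictured as a small record. The following is a minimal sketch under assumed field names; it is an illustration, not the patent's data format.

```python
# A minimal sketch of the per-answer output information: a discovery
# probability per derived behavioral indicator, the answer text that
# corresponds to each indicator, and a basic score for the single
# answer video. Field names are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class OutputInformation:
    discovery_probs: Dict[str, float] = field(default_factory=dict)  # indicator -> probability
    indicator_texts: Dict[str, str] = field(default_factory=dict)    # indicator -> answer excerpt
    basic_score: float = 0.0                                         # score for this answer video

info = OutputInformation(
    discovery_probs={"presents evidence": 0.82},
    indicator_texts={"presents evidence": "I measured the results weekly..."},
    basic_score=3.5,
)
```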
  • The plurality of questions provided to the evaluatee include the in-depth questions derived by the server system, as described with reference to FIG. 17.
  • To derive the output information corresponding to the competency derivation result, the machine learning model first derives feature information from the received answer video and then derives the output information based on that feature information. The derivation of feature information within the machine learning model is described later with reference to FIG. 19.
  • The comprehensive evaluation information can be derived based on the one or more pieces of output information and on the feature information derived by the machine learning model for each answer video, and it may include the score for the specific competency being assessed.
  • More specifically, the output information derived in the first output information deriving step (S21) and the second output information deriving step (S24), together with the feature information derived by the machine learning model used in those steps, is input to a separate machine learning model, which derives comprehensive evaluation information including a score for the specific competency of the evaluatee.
  • The separate machine learning model used in the comprehensive evaluation information deriving step (S25) may be distinct from the machine learning model used in the above-described first output information deriving step (S21) and second output information deriving step (S24); alternatively, the output information and the comprehensive evaluation information may be derived through a single overall machine learning model spanning steps S21, S24, and S25.
  • The separate machine learning model may derive the comprehensive evaluation information using a deep learning method or an ensemble learning method.
  • The comprehensive evaluation information may be derived by inputting the output information and the feature information to the separate machine learning model, or by inputting one or more of the discovery probability information, text information, and basic score information included in the output information together with the feature information.
  • By using the feature information derived from the machine learning model as an input element of the separate machine learning model in the comprehensive evaluation information deriving step (S25), more accurate competency evaluation results can be derived, as in the sketch below.
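The following is a minimal, non-authoritative sketch of such a separate model under an ensemble-learning assumption: per-answer discovery probabilities, basic scores, and feature vectors are concatenated into one input vector for a gradient-boosted regressor trained against expert-assigned competency scores. All field names, shapes, and hyperparameters are illustrative assumptions.

```python
# A minimal sketch of the "separate machine learning model" of step S25,
# assuming an ensemble-learning (gradient boosting) approach over the
# concatenated output information and feature information per answer video.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def build_input_vector(output_infos, feature_vectors):
    """Concatenate discovery probabilities, basic scores, and feature
    vectors from every answer video into one flat input vector."""
    parts = []
    for info, feat in zip(output_infos, feature_vectors):
        parts.append(np.asarray(info["discovery_probs"]))   # per-indicator probabilities
        parts.append(np.asarray([info["basic_score"]]))     # basic score for this answer
        parts.append(np.asarray(feat))                      # feature information from the model
    return np.concatenate(parts)

# Trained on historical examples paired with expert competency scores:
model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
# X = np.stack([build_input_vector(o, f) for o, f in training_examples])
# model.fit(X, y)   # y: expert-assigned scores for the specific competency
```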
  • FIG. 19 schematically shows the internal configuration of a feature extraction model according to an embodiment of the present invention.
  • The above-described machine learning model may include a feature extraction model and a feature inference model. The feature extraction model according to the embodiment shown in FIG. 19 may include: a first deep neural network for extracting spatial feature information to derive a plurality of image feature information from the frames of the answer video performed by the evaluatee; a second deep neural network for extracting spatial feature information to derive a plurality of voice feature information from the voice information of the answer video; a first recurrent neural network module that receives the plurality of image feature information and derives first detailed feature information; a second recurrent neural network module that receives the plurality of voice feature information and derives second detailed feature information; and a third recurrent neural network module that derives third detailed feature information from text obtained by applying Speech-to-Text (STT) conversion to the voice information of the answer video, or from a script entered on the basis of the answer video by an administrator of the server system 1000 or the like.
  • The first deep neural network and the second deep neural network may correspond to CNN modules or the like; in the embodiment shown in FIG. 19, the first deep neural network corresponds to the first CNN module and the second deep neural network corresponds to the second CNN module.
  • The first, second, and third recurrent neural network modules may correspond to LSTM modules, a type of RNN module; in the embodiment shown in FIG. 19, they correspond to the first, second, and third LSTM modules, respectively.
  • The plurality of frames may be generated by dividing the video information of the answer video at preset time intervals.
  • the plurality of image feature information derived by the first CNN module is preferably input to the first LSTM module in chronological order.
  • Characteristic information of the voice (pitch, intensity, etc.) for preset time sections, or the raw voice data itself, is input to the second CNN module, and the voice feature information derived by the second CNN module is preferably input to the second LSTM module in chronological order.
  • The feature information for the voice may correspond to pitch or intensity, but more preferably, Mel-Frequency Cepstral Coefficients (MFCC) may be used, in which the voice is divided into short sections, a Mel filter bank is applied to the spectrum of each section, and features are extracted through cepstral analysis.
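The sectioning, Mel filter bank, and cepstral analysis described above correspond to standard MFCC extraction. A minimal sketch using the librosa library follows; the sampling rate and coefficient count are assumptions for illustration.

```python
# A minimal sketch of MFCC extraction for the voice branch, assuming the
# librosa library: short-time spectra -> Mel filter bank -> log -> DCT.
import librosa

def extract_mfcc(wav_path, n_mfcc=13):
    """Load the answer audio and return an (n_mfcc, frames) MFCC matrix."""
    y, sr = librosa.load(wav_path, sr=16000)  # resample to an assumed 16 kHz
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
```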
  • The script input to the feature extraction model may correspond to a vector sequence in which the script is embedded in token units.
  • The feature information (a vector sequence) corresponding to the output of the feature extraction model is derived based on the first detailed feature information, the second detailed feature information, and the third detailed feature information.
  • The feature information may be derived by simply combining the first, second, and third detailed feature information, or by applying weights or the like to the first, second, and third detailed feature information, as in the sketch below.
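The following is a minimal PyTorch sketch of the feature extraction model described above; all layer sizes, frame dimensions, and the learned-weight combination are illustrative assumptions rather than the patent's reference design. Frames pass through the first CNN and first LSTM, MFCC sections through the second CNN and second LSTM, and script token ids through an embedding and the third LSTM, after which the three detailed features are combined with learned weights.

```python
# A minimal sketch of the feature extraction model of FIG. 19 (sizes assumed).
import torch
import torch.nn as nn

class FeatureExtractionModel(nn.Module):
    def __init__(self, vocab_size=10000, dim=128):
        super().__init__()
        # First CNN: spatial features per video frame (3x64x64 assumed)
        self.cnn_video = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())           # -> 32 per frame
        # Second CNN: features per MFCC section (1x13x32 assumed)
        self.cnn_audio = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())           # -> 16 per section
        self.embed = nn.Embedding(vocab_size, dim)           # script tokens
        self.lstm_video = nn.LSTM(32, dim, batch_first=True) # first LSTM module
        self.lstm_audio = nn.LSTM(16, dim, batch_first=True) # second LSTM module
        self.lstm_text = nn.LSTM(dim, dim, batch_first=True) # third LSTM module
        self.weights = nn.Parameter(torch.ones(3))           # combination weights

    def forward(self, frames, mfcc_sections, token_ids):
        # frames: (B, T, 3, 64, 64); mfcc_sections: (B, S, 1, 13, 32); token_ids: (B, L)
        B, T = frames.shape[:2]
        v = self.cnn_video(frames.flatten(0, 1)).view(B, T, -1)
        S = mfcc_sections.shape[1]
        a = self.cnn_audio(mfcc_sections.flatten(0, 1)).view(B, S, -1)
        _, (f1, _) = self.lstm_video(v)                      # first detailed feature
        _, (f2, _) = self.lstm_audio(a)                      # second detailed feature
        _, (f3, _) = self.lstm_text(self.embed(token_ids))   # third detailed feature
        w = torch.softmax(self.weights, dim=0)
        # Weighted combination of the three detailed feature vectors
        return torch.cat([w[0] * f1[-1], w[1] * f2[-1], w[2] * f3[-1]], dim=-1)
```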
  • The feature inference model applies weights learned by a plurality of fully connected layers to the feature information derived from the feature extraction model to derive an intermediate result (a representative vector), and finally derives the result value for the answer video performed by the evaluatee.
  • In this way, the above-described machine learning model may analyze the answer video performed by the evaluatee and derive information on the degree to which the evaluatee possesses the specific competency corresponding to that answer video.
  • the number of the fully connected layers is not limited to the number shown in FIG. 20, and the feature inference model may include one or more fully connected layers.
  • the intermediate result may be omitted.
  • The feature inference model may be implemented so that it uses a softmax activation function to handle classification according to preset criteria, or derives a score using a sigmoid activation function, etc., as in the sketch below.
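A minimal sketch of the feature inference model follows, assuming two fully connected layers: the first yields the intermediate representative vector and the second the result value, with a sigmoid head producing a score in [0, 1] (a softmax head would instead classify against preset criteria). The dimensions are illustrative assumptions.

```python
# A minimal sketch of the feature inference model of FIG. 20 (sizes assumed).
import torch
import torch.nn as nn

class FeatureInferenceModel(nn.Module):
    def __init__(self, feat_dim=384, hidden=64):
        super().__init__()
        self.fc1 = nn.Linear(feat_dim, hidden)  # -> intermediate representative vector
        self.fc2 = nn.Linear(hidden, 1)         # -> result value

    def forward(self, feature_info):
        representative = torch.relu(self.fc1(feature_info))
        return torch.sigmoid(self.fc2(representative))  # score for the answer video
```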
  • FIG. 21 schematically illustrates an internal configuration of a computing device according to an embodiment of the present invention.
  • The above-described server system 1000 illustrated in FIG. 1 may include the components of the computing device illustrated in FIG. 21.
  • The computing device 11000 may include at least a processor 11100, a memory 11200, a peripheral interface 11300, an input/output (I/O) subsystem 11400, a power circuit 11500, and a communication circuit 11600.
  • The computing device 11000 may correspond to the server system 1000 illustrated in FIG. 1 or to one or more servers included in the server system 1000.
  • The memory 11200 may include, for example, high-speed random access memory, a magnetic disk, SRAM, DRAM, ROM, flash memory, or non-volatile memory.
  • the memory 11200 may include a software module, an instruction set, or other various data required for the operation of the computing device 11000 .
  • access to the memory 11200 from other components such as the processor 11100 or the peripheral device interface 11300 may be controlled by the processor 11100 .
  • The peripheral interface 11300 may couple input and/or output peripherals of the computing device 11000 to the processor 11100 and the memory 11200.
  • the processor 11100 may execute a software module or an instruction set stored in the memory 11200 to perform various functions for the computing device 11000 and process data.
  • the input/output subsystem may couple various input/output peripherals to the peripheral interface 11300 .
  • The input/output subsystem may include controllers for coupling peripheral devices such as a monitor, keyboard, mouse, or printer, or, where required, a touch screen or sensor, to the peripheral interface 11300.
  • input/output peripherals may be coupled to peripheral interface 11300 without going through an input/output subsystem.
  • the power circuit 11500 may supply power to all or some of the components of the terminal.
  • The power circuit 11500 may include a power management system, one or more power sources such as a battery or alternating current (AC), a charging system, a power failure detection circuit, a power converter or inverter, a power status indicator, or any other components for the creation, management, and distribution of power.
  • the communication circuit 11600 may enable communication with another computing device using at least one external port.
  • The communication circuit 11600 may include an RF circuit to transmit and receive RF signals, also known as electromagnetic signals, thereby enabling communication with other computing devices.
  • FIG. 21 is only an example of the computing device 11000; the computing device 11000 may omit some of the components shown in FIG. 21, further include additional components not shown in FIG. 21, or have a configuration or arrangement that combines two or more components.
  • A computing device for a communication terminal in a mobile environment may further include a touch screen or a sensor in addition to the components shown in FIG. 21, and the communication circuit 11600 may include circuitry for various communication methods (WiFi, 3G, LTE, Bluetooth, NFC, Zigbee, etc.).
  • The components that may be included in the computing device 11000 may be implemented as hardware, software, or a combination of both hardware and software, including integrated circuits specialized for particular signal processing or applications.
  • Methods according to an embodiment of the present invention may be implemented in the form of program instructions that can be executed through various computing devices and recorded in a computer-readable medium.
  • the program according to the present embodiment may be configured as a PC-based program or an application dedicated to a mobile terminal.
  • the application to which the present invention is applied may be installed in the user terminal or the affiliated store terminal through the file provided by the file distribution system.
  • the file distribution system may include a file transmission unit (not shown) that transmits the file in response to a request from a user terminal or an affiliated store terminal.
  • the device described above may be implemented as a hardware component, a software component, and/or a combination of the hardware component and the software component.
  • The devices and components described in the embodiments may be implemented using one or more general-purpose or special-purpose computers, such as a processor, a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field-programmable gate array (FPGA), a programmable logic unit (PLU), a microprocessor, or any other device capable of executing and responding to instructions.
  • the processing device may execute an operating system (OS) and one or more software applications executed on the operating system.
  • a processing device may also access, store, manipulate, process, and generate data in response to execution of the software.
  • Although the processing device may be described as being used singly, it may include a plurality of processing elements and/or a plurality of types of processing elements. For example, the processing device may include a plurality of processors, or one processor and one controller. Other processing configurations, such as parallel processors, are also possible.
  • Software may comprise a computer program, code, instructions, or a combination of one or more of these, and may configure the processing device to operate as desired or command the processing device independently or collectively.
  • The software and/or data may be embodied, permanently or temporarily, in any kind of machine, component, physical device, virtual equipment, computer storage medium or device, or in a transmitted signal wave, so as to be interpreted by the processing device or to provide instructions or data to the processing device.
  • the software may be distributed over networked computing devices, and may be stored or executed in a distributed manner. Software and data may be stored in one or more computer-readable recording media.
  • the method according to the embodiment may be implemented in the form of program instructions that can be executed through various computer means and recorded in a computer-readable medium.
  • the computer-readable medium may include program instructions, data files, data structures, etc. alone or in combination.
  • the program instructions recorded on the medium may be specially designed and configured for the embodiment, or may be known and available to those skilled in the art of computer software.
  • Examples of the computer-readable recording medium include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROM and DVD; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory.
  • Examples of program instructions include not only machine language codes such as those generated by a compiler, but also high-level language codes that can be executed by a computer using an interpreter or the like.
  • the hardware devices described above may be configured to operate as one or more software modules to perform the operations of the embodiments, and vice versa.
  • Since the evaluation result is derived from the evaluatee's answer video through a machine learning model that performs the evaluation of a specific competency, the time and cost required for evaluation can be reduced while objective evaluation results are obtained.
  • The evaluation interface provided to the evaluator in the evaluation interface providing step includes a script layer in which a script of the evaluatee's answer video is displayed, so that the evaluator can easily comprehend the evaluatee's answer.
  • When the evaluator selects a specific region of the script in the script layer, a behavioral indicator list area for the corresponding question or specific competency is displayed, so that the evaluator can easily select the behavioral indicator corresponding to the selected region.
  • The evaluation interface includes a behavioral indicator layer that displays the specific region of the script selected by the evaluator in the script layer and the specific behavioral indicator selected in the behavioral indicator list area, so that the evaluator can easily grasp the evaluatee's answer for each behavioral indicator.
  • The evaluation interface includes an in-depth question layer in which the evaluator enters in-depth questions according to the evaluatee's answer video and a remarks layer in which the evaluator enters notable points about the answer video, so that an evaluator being trained in the evaluation method can compare these with the in-depth questions and remarks written by experts in that method.
  • Since video information and audio information are separated from the evaluatee's answer video and each is input to the machine learning model to derive the evaluation result, the context and intent of the evaluatee's answer can be grasped in detail and an accurate evaluation result derived.
  • Since the second evaluatee competency information derived in the competency information deriving step through the machine learning model includes discovery probability information for each behavioral indicator, the evaluation result can be provided objectively.
  • Since the second evaluatee competency information derived in the competency information deriving step through the machine learning model further includes text information from the evaluatee's answer video corresponding to the discovery probability information for each behavioral indicator, the evaluatee's answer corresponding to each behavioral indicator can be provided concretely.
  • Since in-depth questions are set based on the derived behavioral indicators included in the first output information derived in the first output information deriving step and the plurality of behavioral indicators for the specific competency, in-depth questions that can elicit answers for unobserved behavioral indicators can be provided to the evaluatee without an evaluator.
  • Since the comprehensive evaluation information derived in the comprehensive evaluation information deriving step includes a score for the specific competency calculated by synthesizing the discovery probability information in the first output information and the second output information, the evaluatee's evaluation result can be recognized intuitively.
  • An embodiment of the present invention may also be implemented in the form of a recording medium including instructions executable by a computer, such as a program module to be executed by a computer.
  • Computer-readable media can be any available media that can be accessed by a computer and includes both volatile and nonvolatile media, removable and non-removable media.
  • Computer-readable media may include both computer storage media and communication media.
  • Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data.
  • Communication media typically includes computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave, or other transport mechanism, and includes any information delivery media.

Abstract

The present invention relates to a method, a system and a computer-readable medium for deriving in-depth questions for automated evaluation of an interview video by using a machine learning model and, more specifically, to a method, a system, and a computer-readable medium that: present questions for a specific competency to be evaluated to an evaluatee, receive an answer video conducted by the evaluatee, and derive first output information on the basis of the corresponding answer video; on the basis of a plurality of behavioral indicators for the specific competency and one or more derived behavioral indicators included in the first output information, derive in-depth questions that may elicit the evaluatee's answers for incomplete behavioral indicators and behavioral indicators that are not included in the derived behavioral indicators; and receive a video on answers to the in-depth questions from the corresponding evaluatee and finally perform an evaluation of the corresponding specific competency.

Description

Method, system, and computer-readable medium for deriving in-depth questions for automated evaluation of interview videos using a machine learning model
The present invention relates to a method, system, and computer-readable medium for deriving in-depth questions for automated evaluation of an interview video using a machine learning model, and more particularly, to a method, system, and computer-readable medium that present questions about a specific competency to be evaluated to an evaluatee, receive the answer video performed by the evaluatee, derive first output information based on that answer video, derive, based on a plurality of behavioral indicators for the specific competency and one or more derived behavioral indicators included in the first output information, in-depth questions that can elicit the evaluatee's answers for behavioral indicators not included in the derived behavioral indicators and for incomplete behavioral indicators, receive answer videos for the in-depth questions from the evaluatee, and finally perform an evaluation of the specific competency.
Recently, the number of jobs involving complex tasks has been increasing due to fourth-industrial-revolution technologies, and because of rising labor costs and a difficult business environment, companies are devising various recruitment processes to select the candidates best suited to the positions they seek to fill.
As part of such recruitment processes, public institutions such as public corporations implement recruitment procedures that verify applicants' abilities and competencies based on the National Competency Standards (NCS), while private companies incorporate evaluation methods such as assessment centers, work sample tests, ability tests, modern personality tests, bio-data, reference checks, and traditional interviews into their hiring processes to determine whether an applicant possesses the various competencies suited to the job.
The conventional competency evaluation methods described above must be performed by evaluators who have received specialized training in the evaluation method or who have abundant experience. Companies conducting such evaluations incur substantial costs in training or hiring the relevant experts, and even when an expert conducts the evaluation, considerable time is required because the evaluator personally carries out the detailed evaluation procedures.
In addition, in conventional methods, when the evaluator judges that the evaluatee's answer contains nothing related to the competency in question, the evaluator must create a related question so that the evaluatee can address that content and present it again; this additional process of re-posing questions and analyzing the answers requires even more evaluation time.
Therefore, there is a need to develop an evaluation method that evaluates an evaluatee's competencies online through a machine learning model, greatly reducing the time and cost of evaluation while improving the objectivity of the evaluation results.
Accordingly, an object of the present invention is to provide a method, system, and computer-readable medium that perform the automated evaluation described above: presenting questions about a specific competency to an evaluatee, deriving first output information from the answer video performed by the evaluatee, deriving in-depth questions that can elicit the evaluatee's answers for underived and incomplete behavioral indicators based on the plurality of behavioral indicators for the specific competency and the derived behavioral indicators included in the first output information, receiving answer videos for the in-depth questions from the evaluatee, and finally performing an evaluation of the specific competency.
To solve the above problems, an embodiment of the present invention provides an automated evaluation method for an evaluatee based on behavioral indicators, performed in a server system, wherein a plurality of behavioral indicators and a plurality of questions are preset in the server system for a specific competency and each of the plurality of behavioral indicators is associated with one or more of the plurality of questions. The automated evaluation method includes: a general question step including a first question providing step of providing the evaluatee with one or more of the preset questions for evaluating the specific competency, and a first output information deriving step of inputting the answer video performed by the evaluatee for the one or more provided questions into a machine learning model to derive first output information including evaluation information on the specific competency of the evaluatee and derived behavioral indicators related to the evaluation information; an in-depth question setting step of setting one or more in-depth questions based on the one or more derived behavioral indicators after the general question step has been performed one or more times; and a competency evaluation step of evaluating the specific competency based on the answer videos performed by the evaluatee for the in-depth questions and the first output information derived in the first output information deriving step.
In an embodiment of the present invention, the competency evaluation step may include: an in-depth question step including a second question providing step of providing the evaluatee with one or more of the in-depth questions set in the in-depth question setting step, and a second output information deriving step of inputting the answer video performed by the evaluatee for the one or more provided in-depth questions into the machine learning model to derive second output information including evaluation information on the specific competency of the evaluatee and derived behavioral indicators related to the evaluation information; and a comprehensive evaluation information deriving step of deriving comprehensive evaluation information on the specific competency of the evaluatee based on the first output information and the second output information.
In an embodiment of the present invention, the in-depth question setting step may, based on the plurality of behavioral indicators set for the specific competency and the one or more derived behavioral indicators derived through the general question step, identify behavioral indicators among the plurality that were not derived as derived behavioral indicators, and set one or more in-depth questions to elicit answers from the evaluatee related to those underived behavioral indicators.
In an embodiment of the present invention, the in-depth question setting step may, based on the plurality of behavioral indicators set for the specific competency and the one or more derived behavioral indicators derived through the general question step, identify as incomplete behavioral indicators those that were derived as derived behavioral indicators but do not meet preset discrimination criteria, and set one or more in-depth questions to elicit answers from the evaluatee related to the incomplete behavioral indicators.
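As a non-authoritative illustration of the two rules above, the following sketch selects in-depth questions targeting behavioral indicators that were never derived or whose discovery probability falls below an assumed discrimination threshold; the data shapes and threshold value are illustrative assumptions.

```python
# A minimal sketch of the in-depth question setting step: target indicators
# that are unobserved or incomplete, then pick associated questions.
def set_in_depth_questions(all_indicators, question_bank, discovery_probs,
                           threshold=0.6):
    """question_bank maps question text -> set of associated indicators;
    discovery_probs maps derived indicator -> discovery probability."""
    unobserved = [i for i in all_indicators if i not in discovery_probs]
    incomplete = [i for i, p in discovery_probs.items() if p < threshold]
    targets = set(unobserved) | set(incomplete)
    return [q for q, inds in question_bank.items() if inds & targets]
```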
In an embodiment of the present invention, the in-depth question setting step may derive one or more in-depth questions to elicit answers from the evaluatee related to the underived behavioral indicators by inputting the first output information derived in the first output information deriving step into a machine-learning-based in-depth question recommendation model.
In an embodiment of the present invention, the first output information deriving step and the second output information deriving step may separate video information and audio information from the answer video performed by the evaluatee, preprocess each of the separated video information and audio information, and input them into the machine learning model.
In an embodiment of the present invention, the first output information deriving step and the second output information deriving step may include: deriving text information based on the answer video performed by the evaluatee; performing embedding to represent the derived text information as vectors; and inputting the embedded vectors into the machine learning model.
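The text branch just described (derive text, embed it as vectors, input the vectors to the model) can be sketched as follows; the whitespace tokenizer and tiny vocabulary are illustrative assumptions, standing in for whatever STT output and tokenization an implementation would use.

```python
# A minimal sketch of embedding the derived text information as a vector
# sequence for input to the machine learning model (vocabulary assumed).
import torch
import torch.nn as nn

vocab = {"<unk>": 0, "i": 1, "led": 2, "the": 3, "project": 4}
embed = nn.Embedding(num_embeddings=len(vocab), embedding_dim=8)

def embed_script(script: str) -> torch.Tensor:
    """Tokenize naively on whitespace and look up one embedding per token."""
    ids = [vocab.get(tok, vocab["<unk>"]) for tok in script.lower().split()]
    return embed(torch.tensor(ids))  # (num_tokens, 8) vector sequence

vectors = embed_script("I led the project")  # input to the machine learning model
```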
In an embodiment of the present invention, the first output information derived in the first output information deriving step and the second output information derived in the second output information deriving step may further include discovery probability information for the derived behavioral indicators related to the evaluation information, and text information from the answer video performed by the evaluatee corresponding to the discovery probability information.
In an embodiment of the present invention, the comprehensive evaluation information derived in the comprehensive evaluation information deriving step may include a score for the specific competency calculated by synthesizing the discovery probability information for each of the derived behavioral indicators derived in the first output information deriving step and the second output information deriving step.
In an embodiment of the present invention, the comprehensive evaluation information derived in the comprehensive evaluation information deriving step may include a score for the specific competency derived based on one or more of: the discovery probability information and text information for the derived behavioral indicators included in the first output information and the second output information, the basic score information for the corresponding answer videos, and the feature information generated in the machine learning model to derive the first output information and the second output information.
In an embodiment of the present invention, the comprehensive evaluation information derived in the comprehensive evaluation information deriving step may include a score for the specific competency calculated by synthesizing the preprocessing result information for each answer video input in the first output information deriving step and the second output information deriving step.
To solve the above problems, an embodiment of the present invention provides a server system for performing an automated evaluation method for an evaluatee based on behavioral indicators, wherein a plurality of behavioral indicators and a plurality of questions are preset in the server system for a specific competency and each of the plurality of behavioral indicators is associated with one or more of the plurality of questions, the server system including: a general question unit including a first question providing unit that provides the evaluatee with one or more of the preset questions for evaluating the specific competency, and a first output information deriving unit that inputs the answer video performed by the evaluatee for the one or more provided questions into a machine learning model to derive first output information including evaluation information on the specific competency of the evaluatee and derived behavioral indicators related to the evaluation information; an in-depth question setting unit that sets one or more in-depth questions based on the one or more derived behavioral indicators after the general question unit has operated one or more times; and a competency evaluation unit that evaluates the specific competency based on the answer videos performed by the evaluatee for the in-depth questions and the first output information derived by the first output information deriving unit.
To solve the above problems, an embodiment of the present invention provides a computer-readable medium for implementing an automated evaluation method for an evaluatee based on behavioral indicators, performed in a computing device having one or more processors and one or more memories, wherein a plurality of behavioral indicators and a plurality of questions are preset for a specific competency and each of the plurality of behavioral indicators is associated with one or more of the plurality of questions, the automated evaluation method including: a general question step including a first question providing step of providing the evaluatee with one or more of the preset questions for evaluating the specific competency, and a first output information deriving step of inputting the answer video performed by the evaluatee for the one or more provided questions into a machine learning model to derive first output information including evaluation information on the specific competency of the evaluatee and derived behavioral indicators related to the evaluation information; an in-depth question setting step of setting one or more in-depth questions based on the one or more derived behavioral indicators after the general question step has been performed one or more times; and a competency evaluation step of evaluating the specific competency based on the answer videos performed by the evaluatee for the in-depth questions and the first output information derived in the first output information deriving step.
According to an embodiment of the present invention, since the evaluation result is derived from the evaluatee's answer video through a machine learning model that performs the evaluation of a specific competency, the time and cost required for evaluation can be reduced while objective evaluation results are obtained.
According to an embodiment of the present invention, the evaluation interface provided to the evaluator in the evaluation interface providing step includes a script layer in which a script of the evaluatee's answer video is displayed, so that the evaluator can easily comprehend the evaluatee's answer.
According to an embodiment of the present invention, when the evaluator selects a specific region of the script in the script layer, a behavioral indicator list area for the corresponding question or specific competency is displayed, so that the evaluator can easily select the behavioral indicator corresponding to the selected region.
According to an embodiment of the present invention, the evaluation interface includes a behavioral indicator layer that displays the specific region of the script selected by the evaluator in the script layer and the specific behavioral indicator selected in the behavioral indicator list area, so that the evaluator can easily grasp the evaluatee's answer for each behavioral indicator.
According to an embodiment of the present invention, the evaluation interface includes an in-depth question layer in which the evaluator enters in-depth questions according to the evaluatee's answer video and a remarks layer in which the evaluator enters notable points about the answer video, so that an evaluator being trained in the evaluation method can compare these with the in-depth questions and remarks written by experts in that method.
According to an embodiment of the present invention, video information and audio information are separated from the evaluatee's answer video and each is input to the machine learning model to derive the evaluation result, so that the context and intent of the evaluatee's answer can be grasped in detail and an accurate evaluation result derived.
According to an embodiment of the present invention, the second evaluatee competency information derived in the competency information deriving step through the machine learning model includes discovery probability information for each behavioral indicator, so that the evaluation result can be provided objectively.
According to an embodiment of the present invention, the second evaluatee competency information derived in the competency information deriving step through the machine learning model further includes text information from the evaluatee's answer video corresponding to the discovery probability information for each behavioral indicator, so that the evaluatee's answer corresponding to each behavioral indicator can be provided concretely.
According to an embodiment of the present invention, in-depth questions are set based on the derived behavioral indicators included in the first output information derived in the first output information deriving step and the plurality of behavioral indicators for the specific competency, so that in-depth questions that can elicit answers for unobserved behavioral indicators can be provided to the evaluatee without an evaluator.
According to an embodiment of the present invention, in the competency evaluation step, comprehensive evaluation information is derived based on the first output information and the second output information obtained by additionally analyzing the answer videos performed by the evaluatee for the in-depth questions, so that a more accurate evaluation result can be derived.
According to an embodiment of the present invention, the comprehensive evaluation information derived in the comprehensive evaluation information deriving step includes a score for the specific competency calculated by synthesizing the discovery probability information in the first output information and the second output information, so that the evaluatee's evaluation result can be recognized intuitively.
FIG. 1 schematically shows the overall system for performing a method of providing automated evaluation of an interview video using a machine learning model according to an embodiment of the present invention.
FIG. 2 schematically shows the internal configuration of a server system according to an embodiment of the present invention.
FIG. 3 schematically shows behavioral indicators set according to a specific competency to be evaluated and the configuration of the questions provided to the evaluatee accordingly, according to an embodiment of the present invention.
FIG. 4 schematically shows a method of providing automated evaluation of an interview video using a machine learning model performed in a server system according to an embodiment of the present invention.
FIG. 5 schematically shows a screen on which the evaluatee answers a question according to an embodiment of the present invention.
FIG. 6 schematically shows the configuration of an evaluation interface according to an embodiment of the present invention.
FIG. 7 schematically shows a configuration in which a behavioral indicator layer is displayed according to the evaluator's selection in the script layer according to an embodiment of the present invention.
FIG. 8 schematically shows the configuration of another form of evaluation interface according to an embodiment of the present invention.
FIG. 9 schematically shows a process in which a machine learning model is trained in the model learning step according to an embodiment of the present invention.
FIG. 10 schematically shows the detailed configuration of the competency information deriving unit according to an embodiment of the present invention.
FIG. 11 schematically shows a method of deriving in-depth questions for automated evaluation of an interview video performed in a server system according to an embodiment of the present invention.
FIG. 12 schematically shows the detailed steps of the in-depth question setting step according to an embodiment of the present invention.
FIG. 13 schematically shows the detailed steps of an in-depth question setting step implemented in another way according to an embodiment of the present invention.
FIG. 14 schematically shows a process of deriving output information by the machine learning model in the competency information deriving unit according to an embodiment of the present invention.
FIG. 15 schematically shows a process of deriving in-depth questions by the in-depth question recommendation model in the in-depth question setting step according to an embodiment of the present invention.
FIG. 16 schematically shows a configuration in which an answer video performed by the evaluatee is input to the machine learning model and output information is derived from the model, according to an embodiment of the present invention.
FIG. 17 schematically shows a configuration in which in-depth questions are set according to the output information derived by inputting answer videos performed by the evaluatee into the machine learning model, and comprehensive evaluation information is derived accordingly, according to an embodiment of the present invention.
FIG. 18 schematically shows a configuration for deriving comprehensive evaluation information that additionally includes feature information derived from the machine learning model to which the evaluatee's answer videos are input, according to an embodiment of the present invention.
FIG. 19 schematically shows the internal configuration of the feature extraction model according to an embodiment of the present invention.
FIG. 20 schematically shows the internal configuration of the feature inference model according to an embodiment of the present invention.
FIG. 21 schematically shows the internal configuration of a computing device according to an embodiment of the present invention.
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings so that those of ordinary skill in the art to which the present invention pertains can easily practice them. The present invention may, however, be embodied in many different forms and is not limited to the embodiments described herein. In the drawings, parts irrelevant to the description have been omitted for clarity, and like reference numerals denote like parts throughout the specification.
Throughout the specification, when a part is said to be "connected" to another part, this includes not only the case of being "directly connected" but also the case of being "electrically connected" with another element interposed therebetween. Likewise, when a part is said to "include" a certain component, this means that it may further include other components, rather than excluding them, unless specifically stated otherwise.
Terms including ordinal numbers, such as "first" and "second", may be used to describe various components, but the components are not limited by these terms. Such terms are used only to distinguish one component from another. For example, without departing from the scope of the present invention, a first component may be referred to as a second component, and similarly a second component may be referred to as a first component. The term "and/or" includes any combination of a plurality of related listed items or any one of the plurality of related listed items.
In this specification, a "unit" includes a unit realized by hardware, a unit realized by software, and a unit realized using both. One unit may be realized using two or more pieces of hardware, and two or more units may be realized by one piece of hardware. A "unit" is not limited to software or hardware; it may be configured to reside in an addressable storage medium or to operate one or more processors. Accordingly, as an example, a "unit" includes components such as software components, object-oriented software components, class components, and task components, as well as processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuits, data, databases, data structures, tables, arrays, and variables. The functions provided within the components and "units" may be combined into a smaller number of components and "units" or further separated into additional components and "units". Furthermore, the components and "units" may be implemented so as to operate one or more CPUs in a device or a secure multimedia card.
The "evaluator terminal", "first evaluatee terminal", "second evaluatee terminal", and "evaluation education officer terminal" mentioned below may be implemented as computers or portable terminals capable of accessing a server or another terminal through a network. Here, the computer includes, for example, a notebook, desktop, or laptop computer equipped with a web browser, and the portable terminal is, for example, a wireless communication device that guarantees portability and mobility, and may include all kinds of handheld wireless communication devices such as PCS (Personal Communication System), GSM (Global System for Mobile communications), PDC (Personal Digital Cellular), PHS (Personal Handyphone System), PDA (Personal Digital Assistant), IMT (International Mobile Telecommunication)-2000, CDMA (Code Division Multiple Access)-2000, W-CDMA (Wideband Code Division Multiple Access), and WiBro (Wireless Broadband Internet) terminals. In addition, the "network" may be implemented as a wired network such as a Local Area Network (LAN), a Wide Area Network (WAN), or a Value Added Network (VAN), or as any kind of wireless network such as a mobile radio communication network or a satellite communication network.
1. Method of providing automated evaluation of an interview video using a machine learning model
The method of deriving in-depth questions for automated evaluation performed by the server system may correspond to a specific way of deriving an evaluation result by inputting an answer video into a machine learning model, within the method, performed by the server system, of providing automated evaluation of an interview video using a machine learning model.
Therefore, before describing the method of the present invention for deriving in-depth questions for automated evaluation of an interview video using a machine learning model, the overall method of training the machine learning model based on information evaluated by evaluators and deriving evaluation results for interview videos through the trained machine learning model will be described first.
FIG. 1 schematically illustrates the overall system for performing the method of providing automated evaluation of an interview video using a machine learning model according to an embodiment of the present invention.
As shown in FIG. 1, the method of providing automated evaluation of an interview video is performed by the server system 1000, and to perform the method the server system 1000 may communicate with external terminals, namely the evaluator terminal 2000, the first evaluatee terminal 3000, the second evaluatee terminal 4000, and the evaluation education officer terminal 5000. The server system 1000 includes one or more servers, and the servers may communicate with one another to perform the method of providing automated evaluation of an interview video.
The evaluator terminal 2000 corresponds to a terminal used by an evaluator, i.e., a subject who performs evaluation based on an evaluatee's answer video. The evaluator receives the answer video from the server system 1000 through the evaluator terminal 2000 and performs the evaluation. The first evaluatee competency information, which corresponds to the information evaluated by the evaluator, may be used as training data for the machine learning model described later. In addition, the evaluator may correspond to a subject who performs a mock evaluation based on an evaluatee's interview video in order to be trained in the evaluation method of the present invention; in this case, the first evaluatee competency information may be used by the evaluation education officer, who uses the evaluation education officer terminal 5000, as information for teaching the evaluation method.
The first evaluatee terminal 3000 corresponds to a terminal used by the first evaluatee, who answers the questions provided through the server system 1000. Specifically, the first evaluatee receives one or more questions from the server system 1000 and answers each presented question on the first evaluatee terminal 3000, and the answer video recorded by the first evaluatee may be transmitted to the server system 1000. As described above, the answer video of the first evaluatee transmitted to the server system 1000 may be provided to the evaluator through the evaluator terminal 2000, and the first evaluatee competency information may be derived from the evaluator's evaluation.
The second evaluatee terminal 4000 corresponds to a terminal used by the second evaluatee, who answers the questions provided through the server system 1000. Specifically, the second evaluatee receives one or more questions from the server system 1000 and answers each presented question on the second evaluatee terminal 4000, and the answer video recorded by the second evaluatee may be transmitted to the server system 1000. The answer video of the second evaluatee transmitted to the server system 1000 is input into the machine learning model, and the server system 1000 derives the second evaluatee competency information corresponding to the automated evaluation result for the answer video.
Meanwhile, the number of evaluatee terminals communicating with the server system 1000 shown in FIG. 1 is merely illustrative for ease of explanation, and the server system 1000 may communicate with one or more evaluatee terminals. Furthermore, the answer video recorded by the first evaluatee on the first evaluatee terminal 3000 is not limited to being evaluated by an evaluator through the evaluator terminal 2000; as described above, the answer video of the first evaluatee may also be input into the machine learning model of the server system 1000 to derive second evaluatee competency information.
Likewise, the answer video recorded by the second evaluatee on the second evaluatee terminal 4000 is not limited to deriving second evaluatee competency information through the server system 1000; the answer video of the second evaluatee may also be provided to the evaluator and used for the evaluator to perform an evaluation. Thus, the first evaluatee and the second evaluatee are distinguished only for ease of explanation, and the designations "first" and "second" do not imply any difference in configuration.
Meanwhile, the evaluation education officer terminal 5000 is a terminal used by an evaluation education officer, i.e., a subject with expertise in the method of evaluating based on answer videos. The evaluation education officer likewise evaluates an evaluatee's answer video through the evaluation education officer terminal 5000 and transmits the evaluation result to the server system 1000, and the evaluation result is provided to a subject being trained in the evaluation method of the present invention so that the subject can compare the result of his or her mock evaluation with the result evaluated by the evaluation education officer.
In another embodiment of the present invention, account types corresponding to the evaluator, the evaluatee, and the evaluation education officer exist on the server system 1000; a specific terminal may communicate with the server system 1000 using the account type corresponding to each of the evaluator, the evaluatee, and the evaluation education officer, and the specific terminal may receive the information corresponding to each account type and provide it to the respective subject.
FIG. 2 schematically illustrates the internal configuration of the server system 1000 according to an embodiment of the present invention.
As shown in FIG. 2, the server system 1000 may include an evaluation interface providing unit 1100, a competency information receiving unit 1200, a model learning unit 1300, a question providing unit 1400, a competency information derivation unit 1500, and a DB 1600.
The evaluation interface providing unit 1100 provides the evaluator, through the evaluator terminal, with the answer video recorded by the evaluatee, and provides an evaluation interface through which the evaluator inputs the first evaluatee competency information. Accordingly, the evaluator can watch the evaluatee's answer video through the evaluation interface displayed on the evaluator terminal 2000 and, at the same time, check the contents of his or her evaluation.
The competency information receiving unit 1200 receives, from the evaluator terminal 2000, the first evaluatee competency information that the evaluator has input through the evaluation interface.
The model learning unit 1300 serves to train the machine learning model, and for this purpose may use the first evaluatee competency information received by the competency information receiving unit 1200 as training data. More specifically, the model learning unit 1300 may process the first evaluatee competency information into a form suitable for training the machine learning model and train the machine learning model with it.
The question providing unit 1400 provides the evaluatee with one or more questions preset by the server system 1000 for evaluating answer videos for a specific competency. More specifically, the question providing unit 1400 may provide the evaluatee with one or more questions related to a competency selected by the evaluatee, or to a competency corresponding to the company the evaluatee has applied to or to a job at that company.
The competency information derivation unit 1500 derives the second evaluatee competency information based on the answer video recorded by the second evaluatee for the questions provided through the question providing unit 1400. Specifically, the competency information derivation unit 1500 may derive the second evaluatee competency information by inputting the answer video of the second evaluatee into the machine learning model. In addition, when a specific behavioral indicator for the specific competency to be evaluated is not observed, the competency information derivation unit 1500 may set an in-depth question related to that behavioral indicator and derive comprehensive evaluation information, corresponding to the second evaluatee competency information, by further considering the answer video recorded by the second evaluatee for the in-depth question; this will be described in more detail with reference to FIG. 10.
Meanwhile, the DB 1600 may store question contents 1610 related to each competency, behavioral indicators 1620 related to each competency, first evaluatee competency information 1630, second evaluatee competency information 1640, evaluator information 1650 corresponding to each evaluator's personal information such as ID, password, name, and evaluation history, evaluatee information 1660 corresponding to each evaluatee's personal information such as ID, password, and answer history, answer videos 1670 recorded by each evaluatee, and the machine learning model 1680. Although not shown in FIG. 2, a machine-learning-based in-depth question recommendation model for setting in-depth questions in the competency information derivation unit 1500 may additionally be stored in the DB 1600.
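Purely for illustration, the kinds of records listed above for the DB 1600 could be organized as in the following minimal Python sketch; every class and field name here is an assumption introduced for readability, not part of the disclosure.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class BehavioralIndicator:          # behavioral indicator 1620
    indicator_id: str
    competency: str                 # competency this indicator evaluates
    description: str                # e.g. "Shares knowledge and information among team members."

@dataclass
class QuestionContent:              # question content 1610
    question_id: str
    competency: str
    text: str                       # e.g. "How did you resolve conflicts between team members?"
    indicator_ids: List[str] = field(default_factory=list)  # indicators the question is designed to elicit

@dataclass
class EvaluateeCompetencyInfo:      # first/second evaluatee competency information 1630/1640
    evaluatee_id: str
    competency: str
    score: float                    # comprehensive evaluation score, e.g. on a 1.0-5.0 scale
    observed: List[str] = field(default_factory=list)  # indicator_ids observed in the answer
    remarks: str = ""
```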
The machine learning model is a machine-learned model for performing evaluation based on answer videos. Preferably, a machine learning model is provided individually for each competency to be evaluated, so that the server system 1000 may include a plurality of machine learning models.
In another embodiment of the present invention, the server system 1000 may include two or more servers, each server may include some of the above-described components, and the servers may communicate with each other to perform the method of providing automated evaluation of an interview video using the machine learning model. For example, the functions provided to the evaluatee or the evaluator may be included in one specific server, while the machine learning model and the functions for training it may be included in another specific server, and the method of the present invention for providing automated evaluation of an interview video using the machine learning model may be performed through communication between the two servers.
FIG. 3 schematically illustrates behavioral indicators set according to a specific competency to be evaluated and the configuration of the questions provided to the evaluatee accordingly, according to an embodiment of the present invention.
As shown in FIG. 3, in the present invention, one or more behavioral indicators and one or more questions may be set for each competency to be evaluated in order to evaluate the evaluatee's competencies.
A behavioral indicator is an evaluation criterion for assessing a competency; by checking the portions of the evaluatee's answers in which a behavioral indicator is observed, the evaluator can assess the degree to which the evaluatee possesses the corresponding competency.
Each question is designed so that one or more behavioral indicators can be observed in the evaluatee's answer. For example, in the evaluatee's answer to the question "How did you resolve conflicts between team members?", the behavioral indicator "Induces team members to collaborate toward the team's goal." may be observed. To this end, each question may be designed in a form that elicits an answer covering one or more of Situation, Task, Action, and Result.
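Continuing the illustration, and reusing the hypothetical QuestionContent type from the sketch above, a question record built around this design principle might link the indicators it is expected to elicit; the concrete strings are taken from the examples in this paragraph, while the identifiers are assumptions.

```python
# A question designed so that an answer covering Situation, Task,
# Action, and Result can expose one or more behavioral indicators.
conflict_question = QuestionContent(
    question_id="Q-001",
    competency="teamwork",
    text="How did you resolve conflicts between team members?",
    indicator_ids=["BI-01"],  # "Induces team members to collaborate toward the team's goal."
)
```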
Meanwhile, the questions designed for each competency in this way may be provided to the first evaluatee or the second evaluatee via the first evaluatee terminal 3000 or the second evaluatee terminal 4000, respectively, through the question providing unit 1400. Furthermore, since the competencies required may differ between companies, and even within the same company may differ depending on the job, the question providing unit 1400 may provide the evaluatee with questions appropriate to the company the evaluatee has applied to, the company for which the evaluatee wishes to conduct a mock interview, or the job at such a company.
FIG. 4 schematically illustrates the method of providing automated evaluation of an interview video using a machine learning model, performed by the server system 1000, according to an embodiment of the present invention.
As shown in FIG. 4, the automated evaluation method for an evaluatee's answer video performed by the server system 1000 may include: an evaluation interface providing step S10 of providing an evaluator with an evaluation interface including an answer video recorded by the first evaluatee for one or more questions preset for evaluating a specific competency; a competency information receiving step S11 of receiving from the evaluator, on the evaluation interface, first evaluatee competency information including evaluation information on the specific competency and behavioral indicators corresponding to the evaluation information; a model learning step S12 of training a machine learning model for the specific competency based on the first evaluatee competency information; a question providing step S13 of providing one or more questions preset for evaluating the specific competency to a second evaluatee; and a competency information derivation step S14 of inputting the answer video recorded by the second evaluatee for the one or more questions provided through the question providing step S13 into the machine learning model to derive second evaluatee competency information including evaluation information on the specific competency.
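As a deliberately simplified sketch of how steps S12 and S14 fit together: the disclosure does not specify a model architecture, so a toy word-overlap scorer stands in for the machine learning model and transcript strings stand in for answer videos; all names below are assumptions.

```python
from typing import Callable, Dict, List, Tuple

# The "model" maps a transcript to a score per behavioral indicator.
Model = Callable[[str], Dict[str, float]]

def train_model(labelled: List[Tuple[str, str]]) -> Model:
    # S12 (toy stand-in): remember which words co-occur with each observed
    # behavioral indicator, then score new transcripts by word overlap.
    vocab: Dict[str, set] = {}
    for transcript, indicator in labelled:
        vocab.setdefault(indicator, set()).update(transcript.lower().split())

    def model(transcript: str) -> Dict[str, float]:
        words = set(transcript.lower().split())
        return {ind: len(words & seen) / max(len(seen), 1)
                for ind, seen in vocab.items()}
    return model

# S11: evaluator-entered (transcript excerpt, observed indicator) pairs.
first_info = [("we shared knowledge and information weekly", "BI-01")]
model = train_model(first_info)                       # S12
second_answer = "I shared information with my team"   # answer to the S13 questions
print(model(second_answer))                           # S14: per-indicator scores
```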
Specifically, to perform the method of providing automated evaluation of answer videos, the server system 1000 performs the evaluation interface providing step S10 of providing the evaluator with an evaluation interface including the answer video recorded by the first evaluatee. To this end, the first evaluatee may request an evaluation from the server system 1000 through the first evaluatee terminal 3000, the question providing unit 1400 of the server system 1000 may provide one or more questions corresponding to the request to the first evaluatee terminal 3000, and the first evaluatee may generate an answer video. The answer video generated in this way may be transmitted to the server system 1000 and stored in the DB 1600.
Thereafter, when the evaluator requests an answer video through the evaluator terminal 2000, the server system 1000 may perform the evaluation interface providing step S10. In the evaluation interface providing step S10, the evaluation interface including the answer video of the first evaluatee corresponding to the evaluator's request is displayed on the evaluator terminal 2000.
Meanwhile, through the evaluation interface displayed on the evaluator terminal 2000, the evaluator inputs first evaluatee competency information including evaluation information on the specific competency related to the first evaluatee's answer video and the behavioral indicators corresponding to each piece of evaluation information; the evaluator terminal 2000 transmits the input first evaluatee competency information to the server system 1000; and the competency information receiving unit 1200 of the server system 1000 performs the competency information receiving step S11 to receive the first evaluatee competency information.
More specifically, the evaluation information included in the first evaluatee competency information may correspond to the portions of the first evaluatee's answer in which the corresponding behavioral indicator can be observed.
In the model learning step S12, the machine learning model for a specific competency may be trained based on the plurality of pieces of first evaluatee competency information received through the competency information receiving step S11. Since the server system 1000 includes one or more machine learning models that perform evaluation for each competency, when training the machine learning model for a specific competency in the model learning step S12, the first evaluatee competency information for that specific competency may be used as training data; alternatively, separate labeling may be performed in the model learning step S12 so that first evaluatee competency information for the specific competency is distinguished from first evaluatee competency information for other competencies, and the labeled first evaluatee competency information may be used as training data.
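A minimal sketch of the labeling option just described, assuming the first evaluatee competency information is available as plain records with a competency field; the field and function names are assumptions.

```python
from typing import List, Tuple

def label_training_data(records: List[dict], target: str) -> List[Tuple[dict, int]]:
    # Label first evaluatee competency information so that examples for the
    # target competency (label 1) are distinguished from examples for other
    # competencies (label 0), as one option described for step S12.
    return [(rec, 1 if rec["competency"] == target else 0) for rec in records]

records = [
    {"competency": "teamwork", "score": 3.5},
    {"competency": "leadership", "score": 4.0},
]
print(label_training_data(records, target="teamwork"))
```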
In addition, the machine learning model may also use, as training data, the answer videos corresponding to each piece of first evaluatee competency information used as training data.
Meanwhile, when the second evaluatee requests an evaluation through the second evaluatee terminal 4000, the server system 1000 performs the question providing step S13 of providing the second evaluatee with one or more preset questions corresponding to the request. An evaluatee's request for evaluation to the server system 1000, such as the first evaluatee's request and the second evaluatee's request described above, may correspond to one of two distinct kinds of request, namely a request for direct evaluation by an evaluator or a request for evaluation through the machine learning model of the server system 1000, or it may correspond to a request for both the evaluator's evaluation and the evaluation through the machine learning model. The request of the first evaluatee or the second evaluatee may also include information specifying the particular company, the job at a particular company, or the particular competency for which the evaluatee wishes to be evaluated.
Subsequently, the second evaluatee who requested an evaluation through the question providing step S13 is provided with one or more questions and generates an answer video through the second evaluatee terminal 4000. The second evaluatee terminal 4000 transmits the generated answer video of the second evaluatee to the server system 1000, and the server system 1000 performs the competency information derivation step S14 of inputting the received answer video of the second evaluatee into the machine learning model to derive the second evaluatee competency information.
The second evaluatee competency information derived through the competency information derivation step S14 is competency information derived by the server system 1000 itself based on the answer video of the second evaluatee. It may be derived as information of a form similar to the first evaluatee competency information evaluated by the evaluator described above, or in a form different from the first evaluatee competency information, for example including information on the probability of detecting one or more behavioral indicators for the specific competency to be evaluated in the second evaluatee's answer video.
Meanwhile, in the competency information derivation step S14, first output information may be derived for the answer video of the second evaluatee; based on the one or more derived behavioral indicators included in the first output information and the plurality of behavioral indicators for the specific competency to be evaluated, an in-depth question may be set that relates to the one or more behavioral indicators not included among the derived behavioral indicators; the in-depth question may be provided to the second evaluatee to derive second output information for the answer video addressing the behavioral indicators not included among the derived behavioral indicators; and comprehensive evaluation information corresponding to the second evaluatee competency information may finally be derived. This will be described in more detail with reference to FIG. 11.
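The selection of in-depth questions just described can be illustrated with a short sketch; the follow_ups mapping from behavioral indicators to candidate in-depth questions is an assumption standing in for the in-depth question recommendation model mentioned elsewhere in this description.

```python
from typing import Dict, List

def select_in_depth_questions(
    derived: List[str],          # indicators found in the first output information
    required: List[str],         # all indicators of the competency under evaluation
    follow_ups: Dict[str, str],  # hypothetical mapping: indicator -> in-depth question
) -> List[str]:
    # Set an in-depth question for every behavioral indicator that was
    # not included among the derived behavioral indicators.
    missing = [ind for ind in required if ind not in derived]
    return [follow_ups[ind] for ind in missing if ind in follow_ups]

questions = select_in_depth_questions(
    derived=["BI-01"],
    required=["BI-01", "BI-02"],
    follow_ups={"BI-02": "Can you describe a time you shared know-how with a colleague?"},
)
print(questions)  # -> the follow-up question for the unobserved indicator BI-02
```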
Meanwhile, the automated evaluation method may further include a comprehensive competency information derivation step S15 of inputting the plurality of pieces of second evaluatee competency information derived in the competency information derivation step S14 into a comprehensive machine learning model to derive comprehensive evaluatee competency information including score information on the degree to which the second evaluatee possesses the specific competency.
As described above, the degree to which the second evaluatee possesses a specific competency can be evaluated by inputting the answer video of the second evaluatee into the machine learning model in the competency information derivation step S14 to derive the second evaluatee competency information; in another embodiment of the present invention, however, second evaluatee competency information is derived in the competency information derivation step S14 for each answer video recorded by the second evaluatee for each of a plurality of questions about the specific competency.
Meanwhile, in the comprehensive competency information derivation step S15, the comprehensive evaluatee competency information is derived based on the plurality of pieces of second evaluatee competency information derived in the competency information derivation step S14. Specifically, in the comprehensive competency information derivation step S15, the plurality of pieces of second evaluatee competency information may be input into the comprehensive machine learning model included in the server system 1000 to derive the comprehensive evaluatee competency information.
The comprehensive evaluatee competency information corresponds to information that comprehensively evaluates the degree to which the second evaluatee possesses the specific competency to be evaluated, by aggregating the plurality of pieces of second evaluatee competency information derived for each of the answer videos for the plurality of questions provided to the second evaluatee. Similarly to the evaluation score that an evaluator inputs on the evaluation interface, the comprehensive evaluatee competency information includes score information on the degree of possession of the specific competency, so that the degree to which the second evaluatee possesses the specific competency can be recognized quantitatively through the comprehensive evaluatee competency information.
Meanwhile, the comprehensive machine learning model may correspond to a separate machine-learning-based model distinct from the machine learning model described above, or the comprehensive machine learning model and the machine learning model may be included within one overall machine learning model, so that the second evaluatee competency information derived from the machine learning model is input into the comprehensive machine learning model to derive the comprehensive evaluatee competency information.
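As a hedged illustration of the aggregation performed in step S15: the disclosure does not specify how the per-question information is combined, so in the sketch below a simple average rescaled to the evaluators' 1-to-5 scale stands in for the comprehensive machine learning model, and all names are assumptions.

```python
from typing import List

def derive_comprehensive_score(per_question_info: List[dict]) -> float:
    # Stand-in for the comprehensive machine learning model (S15): the
    # per-question indicator detection probabilities are averaged and
    # rescaled to the 1-5 score range used by evaluators.
    probs = [p for info in per_question_info for p in info["indicator_probs"].values()]
    mean = sum(probs) / len(probs) if probs else 0.0
    return round((1.0 + 4.0 * mean) * 2) / 2  # snap to 0.5-point steps

infos = [{"indicator_probs": {"BI-01": 0.8, "BI-02": 0.4}},
         {"indicator_probs": {"BI-01": 0.6}}]
print(derive_comprehensive_score(infos))  # -> 3.5 for this example
```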
FIG. 5 schematically illustrates a screen on which an evaluatee answers a question according to an embodiment of the present invention.
As shown in FIG. 5, the first evaluatee terminal 3000 or the second evaluatee terminal 4000 may be provided with one or more questions through the question providing step S13 performed by the server system 1000 and may generate an answer video. In the question providing step S13, one or more preset questions corresponding to the request of the first or second evaluatee are provided to the first evaluatee terminal 3000 or the second evaluatee terminal 4000. For example, when an evaluation related to a job at a specific company is requested, the question providing step S13 may provide questions related to one or more competencies associated with that job.
Meanwhile, the first evaluatee terminal 3000 or the second evaluatee terminal 4000 that has received the question records the evaluatee's answer video for the question through a camera module provided in the terminal. FIG. 5 shows a configuration in which, on the screen of the first evaluatee terminal 3000 or the second evaluatee terminal 4000, the question provided in the question providing step S13, the answer time limit, and the elapsed answer time are displayed at the bottom, and the evaluatee's answer video is displayed in real time at the top. However, the present invention is not limited thereto, and the screen of the first evaluatee terminal 3000 or the second evaluatee terminal 4000 may be configured with various display methods, such as displaying the question first and then switching the screen to display only the evaluatee's real-time answer video, or providing the question in sound form as well as in text form.
In this way, the first evaluatee terminal 3000 and the second evaluatee terminal 4000 generate an answer video recorded by the evaluatee for the one or more questions provided through the question providing step S13 and transmit the generated answer video to the server system 1000, whereby the answer video can be evaluated.
FIG. 6 schematically illustrates the configuration of an evaluation interface according to an embodiment of the present invention.
Through the evaluation interface providing step S10 performed by the server system 1000, the evaluation interface may be displayed on the evaluator terminal 2000. The evaluation interface displays elements for the evaluator to perform an evaluation based on the answer video recorded by the first evaluatee, and the answer video can be evaluated according to the evaluator's inputs.
Specifically, the evaluation interface includes an answer video layer L1 in which the answer video recorded by the first evaluatee is displayed. The answer video is played according to the evaluator's playback input on the answer video layer L1, so that the evaluator can check its contents. Meanwhile, at the bottom of the answer video layer L1, the question related to the answer video, more specifically the question provided in the question providing step S13 to generate the answer video, is displayed in text form so that the evaluator can more clearly recognize which question the answer video was generated for.
Meanwhile, the evaluation interface provided to the evaluator in the evaluation interface providing step S10 includes a script layer L2 in which a script generated based on the answer video recorded by the first evaluatee is displayed; in the script layer L2, when the evaluator selects a specific area of the script, a behavioral indicator list area A1 including one or more behavioral indicators corresponding to the question or the specific competency may be displayed.
Specifically, the script layer L2 displays a script in which the content of the answer video displayed in the answer video layer L1 has been converted into text form. The server system 1000 includes a Speech-to-Text (STT) module that converts the audio information of the answer video into text information, and may derive the script for the answer video through the STT module. In another embodiment of the present invention, the server system 1000 further includes a video-audio separation module, separates the video information and the audio information of the answer video through the video-audio separation module, and inputs the separated audio information into the STT module to derive the script. Accordingly, the evaluator can clearly grasp, in text form through the script layer L2, speech that is not clearly recognizable in the answer video played in the answer video layer L1.
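A minimal sketch of this audio pipeline, assuming the widely available ffmpeg command-line tool for the video-audio separation step; the STT call itself is left as a placeholder because the disclosure does not name a particular engine, and the function names are assumptions.

```python
import subprocess

def extract_audio(video_path: str, wav_path: str) -> None:
    # Video-audio separation module (sketch): drop the video stream and
    # write 16 kHz mono PCM audio, a common input format for STT engines.
    subprocess.run(
        ["ffmpeg", "-y", "-i", video_path, "-vn",
         "-acodec", "pcm_s16le", "-ar", "16000", "-ac", "1", wav_path],
        check=True,
    )

def speech_to_text(wav_path: str) -> str:
    # Placeholder for the STT module; any speech recognizer could be
    # substituted here. The returned text becomes the script shown in L2.
    raise NotImplementedError("plug in an STT engine of your choice")
```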
In addition, the script may be generated not only by the STT module; the evaluator may also play the answer video on the script layer L2 and enter the script directly, or the script primarily generated by the STT module may be displayed on the script layer L2 and finally produced by the evaluator correcting the content of the primarily generated script.
Meanwhile, in the script displayed in the script layer L2, a specific area of the script can be selected by an input such as a drag performed by the evaluator, and when a specific area of the script is selected, the behavioral indicator list area A1 including one or more behavioral indicators related to the question or the specific competency to be evaluated is displayed in the script layer L2. In the behavioral indicator list area A1, the evaluator can select the behavioral indicator related to the specific area of the script he or she has selected, and the selected specific area of the script can be displayed in the behavioral indicator layer L6 described later. This will be described below with reference to FIG. 7.
The evaluation interface includes a score evaluation layer L3, through which the evaluator can input a comprehensive evaluation score for the specific competency for the answer video. When the evaluator selects the evaluation score area displayed in the score evaluation layer L3, one or more preset evaluation scores are displayed. For example, the one or more preset evaluation scores may be displayed at 0.5-point intervals in the range of 1 to 5 points. When the evaluator then selects a specific evaluation score among the one or more displayed evaluation scores ("3 points" in FIG. 6), the corresponding evaluation score may be input and displayed in the score evaluation layer L3.
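For instance, the preset evaluation scores described here (1 to 5 points at 0.5-point intervals) could be generated as follows; the function name is an assumption.

```python
def preset_scores(lo: float = 1.0, hi: float = 5.0, step: float = 0.5) -> list:
    # The evaluation scores offered in the score evaluation layer L3:
    # 1.0, 1.5, ..., 5.0 at 0.5-point intervals.
    n = int(round((hi - lo) / step)) + 1
    return [lo + i * step for i in range(n)]

print(preset_scores())  # [1.0, 1.5, 2.0, ..., 5.0]
```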
Meanwhile, the evaluation interface provided to the evaluator in the evaluation interface providing step S10 may include: an in-depth question layer L4 through which the evaluator inputs a separate in-depth question for eliciting a specific behavioral indicator when the specific behavioral indicator corresponding to the question or the specific competency is not observed in the script; and a remarks layer L5 through which the evaluator inputs remarks on the answer video recorded by the first evaluatee.
Specifically, when the evaluator determines that a specific behavioral indicator among the one or more behavioral indicators for the specific competency to be evaluated has not been observed in the answer video, the in-depth question layer L4 can receive from the evaluator an in-depth question for eliciting an answer in which that specific behavioral indicator can be observed.
In another embodiment of the present invention, the in-depth question layer L4 may additionally receive, in addition to the in-depth question described above, any other content that the evaluator wishes to ask the first evaluatee.
The remarks layer L5 can receive from the evaluator remarks on the answer video displayed in the answer video layer L1. For example, the evaluator may enter into the remarks layer L5 a remark on the answer video such as "The truthfulness of the answer is doubtful, given that the evaluatee appears flustered, for example trailing off at the ends of sentences.", and the remarks entered in this way may be included in the first evaluatee competency information.
Meanwhile, the information input by the evaluator on the evaluation interface may be included in the first evaluatee competency information described above; the first evaluatee competency information is provided to the first evaluatee and may also be used as training data to train the machine learning model in the model learning step S12.
In addition, the in-depth questions input by the evaluator on the in-depth question layer L4 may be used as training data for the in-depth question recommendation model, which derives in-depth questions according to the evaluatee's answer video. More specifically, since an in-depth question input by the evaluator on the in-depth question layer L4 corresponds to a question for eliciting the second evaluatee's answer regarding a behavioral indicator not observed in the second evaluatee's answer video, the in-depth question input by the evaluator together with the unobserved behavioral indicator may be used as training data for the in-depth question recommendation model.
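A sketch of how such training pairs might be assembled, assuming session records that hold the unobserved indicators and the in-depth questions entered in layer L4; the record layout and names are assumptions.

```python
from typing import List, Tuple

def build_recommendation_examples(sessions: List[dict]) -> List[Tuple[str, str]]:
    # Pair each unobserved behavioral indicator with the in-depth question
    # the evaluator entered for it in layer L4; these (indicator, question)
    # pairs serve as training data for the in-depth question recommendation model.
    pairs = []
    for s in sessions:
        for indicator, question in s["entered_questions"].items():
            if indicator in s["unobserved_indicators"]:
                pairs.append((indicator, question))
    return pairs

sessions = [{
    "unobserved_indicators": ["BI-02"],
    "entered_questions": {"BI-02": "How do you share know-how within your team?"},
}]
print(build_recommendation_examples(sessions))
```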
Meanwhile, although not shown in FIG. 6, in another embodiment of the present invention an expert comparison element may be displayed on the evaluation interface. When the evaluator performs a selection input on the expert comparison element, the contents of an evaluation performed by an expert in the evaluation method of the present invention on the answer video displayed in the answer video layer L1 of the evaluation interface may be displayed.
More specifically, the contents entered by the expert may be displayed corresponding to the evaluation score entered by the evaluator on the score evaluation layer L3, the in-depth question on the in-depth question layer L4, the remarks on the remarks layer L5, and the specific areas of the script for each of the one or more behavioral indicators on the behavioral indicator layer L6. Through this, when the evaluator is a subject being trained in the evaluation method of the present invention, the evaluator can compare his or her own evaluation with the expert's evaluation.
도 7은 본 발명의 일 실시예에 따른 스크립트레이어(L2)에서 평가자의 선택에 따라 행동지표레이어(L6)가 표시되는 구성을 개략적으로 도시한다.7 schematically shows a configuration in which the behavior indicator layer L6 is displayed according to the selection of the evaluator in the script layer L2 according to an embodiment of the present invention.
도 7에 도시된 바와 같이, 상기 평가인터페이스는, 상기 평가자가 선택한 상기 스크립트레이어(L2)에 표시된 스크립트의 특정 영역에 해당하는 텍스트가 표시되는 행동지표레이어(L6);를 더 포함할 수 있다.As shown in FIG. 7 , the evaluation interface may further include a behavior indicator layer L6 in which text corresponding to a specific area of the script displayed on the script layer L2 selected by the evaluator is displayed.
구체적으로, 상기 평가인터페이스는 행동지표레이어(L6)를 더 포함하고, 상기 행동지표레이어(L6)는 평가자가 선택한 스크립트의 특정 영역(B1)에 해당하는 텍스트가 표시될 수 있다. 더 구체적으로, 평가자가 상기 스크립트레이어(L2)에 표시되는 스크립트의 특정 영역을 선택(B1)하는 경우, 해당 질문 혹은 평가하고자 하는 역량에 상응하는 1 이상의 행동지표를 포함하는 행동지표리스트영역(A1)이 상기 스크립트레이어(L2) 상에 표시되고, 평가자가 상기 행동지표리스트영역(A1)에서 특정 행동지표(B2)를 선택하는 경우에 상기 행동지표레이어(L6)에 평가자가 선택한 스크립트의 특정 영역(B1)에 해당하는 텍스트가 표시될 수 있다.Specifically, the evaluation interface further includes a behavior indicator layer (L6), and the behavior indicator layer (L6) may display text corresponding to a specific area (B1) of the script selected by the evaluator. More specifically, when the evaluator selects (B1) a specific area of the script displayed on the script layer (L2), the action index list area (A1) including one or more action indicators corresponding to the question or the competency to be evaluated ) is displayed on the script layer L2, and when the evaluator selects a specific behavior indicator B2 from the behavior indicator list area A1, the specific region of the script selected by the evaluator on the behavior indicator layer L6 Text corresponding to (B1) may be displayed.
더 구체적으로, 상기 행동지표레이어(L6)는, 상기 평가자가 상기 선택된 스크립트의 특정 영역(B1)에 대하여 표시되는 행동지표리스트영역(A1)에서 특정 행동지표(B2)를 선택하는 경우, 상기 행동지표레이어(L6)에 표시된 상기 특정 행동지표(B2)에 상응하는 위치에서 상기 선택된 스크립트의 특정 영역(B1)에 해당하는 텍스트가 표시될 수 있다.More specifically, the behavior indicator layer (L6), when the evaluator selects a specific behavior indicator (B2) from the behavior indicator list region (A1) displayed for the specific region (B1) of the selected script, the behavior Text corresponding to the specific area B1 of the selected script may be displayed at a position corresponding to the specific behavior indicator B2 displayed on the indicator layer L6.
도 7에 도시된 바와 같이, 상기 스크립트레이어(L2)에 표시된 스크립트에서 평가자가 특정 영역(B1)을 선택하는 경우, 상기 스크립트레이어(L2) 상에는 행동지표리스트영역(A1)이 오버레이되고, 평가자는 자신이 선택한 스크립트의 특정 영역에 관련된 특정 행동지표(B2)를 상기 행동지표리스트영역(A1)에서 선택한다.7, when the evaluator selects a specific area B1 in the script displayed on the script layer L2, the behavior index list area A1 is overlaid on the script layer L2, and the evaluator A specific action index (B2) related to a specific area of the script selected by the user is selected from the action index list area (A1).
이와 같이, 평가자가 상기 행동지표리스트영역(A1)에서 특정 행동지표(B2)를 선택하는 경우, 상기 행동지표레이어(L6)에는 평가자가 선택한 특정 행동지표(B2) 및 상기 특정 행동지표에 상응하는 상기 스크립트의 특정 영역(B1)에 상응하는 텍스트가 표시된다. 도 7에서는 평가자가 상기 행동지표리스트영역(A1)에서 스크립트의 특정 영역에 상응하는 특정 행동지표(B2, '팀원들간 지식과 정보를 공유한다.')를 선택하는 경우, 상기 행동지표레이어(L6)에는 해당 행동지표('팀원들간 지식과 정보를 공유한다.') 및 이에 상응하는 스크립트의 특정 영역('실제 삼양사 업무를 진행할 때 사장님과 대리님이 하는 업무를 어깨너머 배워 익힐 수 있었고')이 표시된다.As such, when the evaluator selects a specific behavioral indicator B2 from the behavioral indicator list area A1, the behavioral indicator layer L6 includes the specific behavioral indicator B2 selected by the evaluator and the corresponding specific behavioral indicator. The text corresponding to the specific area B1 of the script is displayed. In Fig. 7, when the evaluator selects a specific behavior indicator (B2, 'sharing knowledge and information among team members.') corresponding to a specific region of the script in the behavior indicator list region A1, the behavior indicator layer L6 ), the corresponding behavioral indicator ('sharing knowledge and information among team members.') and a specific area of the corresponding script ('I was able to learn over the shoulder of the boss and the assistant manager when I was actually working at Samyang Corporation, and I was able to learn it') is displayed.
한편, 상기 행동지표레이어(L6)에는 답변영상과 관련된 질문 혹은 평가하고자 하는 역량과 상응하는 1 이상의 행동지표가 미리 표시되어 있고, 평가자가 상기 행동지표리스트영역(A1)에서 특정 행동지표를 선택하는 경우에, 상기 행동지표레이어(L6)에 미리 표시된 특정 행동지표에 상응하는 위치(도 7에서는 하단)에 평가자가 선택한 스크립트의 특정 영역의 텍스트가 표시될 수 있으나, 본 발명의 다른 실시예에서는, 상기 행동지표레이어(L6)에는 답변영상과 관련된 질문 혹은 평가하고자 하는 역량과 상응하는 1 이상의 행동지표가 미리 표시되어 있지 않고, 평가자가 상기 행동지표리스트영역(A1)에서 특정 행동지표를 선택하는 경우에, 상기 행동지표레이어(L6)에 특정 행동지표 및 평가자가 선택한 스크립트의 특정 영역의 텍스트가 함께 표시될 수 있다.On the other hand, in the behavior indicator layer (L6), one or more behavior indicators corresponding to a question related to an answer image or a competency to be evaluated are displayed in advance, and the evaluator selects a specific behavior indicator from the behavior indicator list area (A1). In this case, the text of a specific area of the script selected by the evaluator may be displayed at a position (bottom in FIG. 7) corresponding to the specific behavior indicator displayed in advance on the behavior indicator layer L6, but in another embodiment of the present invention, When one or more behavioral indicators corresponding to a question related to an answer image or a competency to be evaluated are not displayed in the behavioral indicator layer (L6) in advance, and the evaluator selects a specific behavioral indicator from the behavioral indicator list area (A1) , a specific behavior indicator and text of a specific region of a script selected by the evaluator may be displayed together on the behavior indicator layer L6.
Through such a configuration, the evaluator can conveniently select a behavior indicator matching the region selected on the script, and since the selected behavior indicator and the specific region of the script are separately displayed on the behavior indicator layer (L6), the time previously required for the evaluator to manually structure the evaluatee's answers according to the behavior indicators when performing an evaluation based on an answer video can be saved, and the evaluation can be performed more smoothly.
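The mapping between behavior indicators and the script spans matched to them can be pictured as a simple data structure. The following is a minimal sketch under that assumption; the class and field names are illustrative and do not appear in the document.

```python
# Sketch of the state behind the behavior indicator layer (L6): each
# behavior indicator accumulates the script spans the evaluator has
# matched to it via the overlay list (A1). Names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class IndicatorEntry:
    indicator: str
    script_spans: list[str] = field(default_factory=list)

class BehaviorIndicatorLayer:
    def __init__(self, indicators: list[str]):
        # Indicators may be pre-populated from the question/competency,
        # as in the FIG. 7 embodiment described above.
        self.entries = {name: IndicatorEntry(name) for name in indicators}

    def select_span(self, indicator: str, span_text: str) -> None:
        # Called when the evaluator picks an indicator for a selected
        # script region (B1) from the overlaid list area (A1).
        self.entries.setdefault(indicator, IndicatorEntry(indicator))
        self.entries[indicator].script_spans.append(span_text)
```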
FIG. 8 schematically shows the configuration of an evaluation interface of another form according to an embodiment of the present invention.
As shown in FIG. 8, the form of the evaluation interface provided to the evaluator is not limited to the form shown in FIG. 6 described above, and the interface may be configured in the form shown in FIG. 8 or in other forms and provided to the evaluator.
In the evaluation interface shown in FIG. 8, the answer video layer (L10) and the script layer (L11) are located at the top of the evaluation interface, so that the evaluator can select the specific contents of the script for each behavior indicator based on the answer video displayed on the answer video layer (L10) and the script displayed on the script layer (L11). Meanwhile, when the evaluator selects a specific region of the script on the script layer (L11), the behavior indicator list area (A10) may be overlaid on the script layer (L11).
Meanwhile, the interface is configured so that the contents entered by the evaluator are input at the bottom of the evaluation interface. The evaluator thus reviews the contents of the evaluatee's answer video at the top of the evaluation interface and enters the contents concerning the answer video in the lower area; accordingly, the form of the evaluation interface shown in FIG. 8 may correspond to an evaluation interface designed with greater consideration of user experience than the configuration of FIG. 6.
Specifically, the behavior indicator layer (L12) is located at the lower left of the evaluation interface, where the contents of the script for each behavior indicator entered by the evaluator on the script layer (L11) can be displayed. In addition, at the lower right of the evaluation interface, the in-depth question layer (L13), the remarks layer (L14), and the score evaluation layer (L15) are arranged in sequence, so that after entering in-depth questions and remarks, the evaluator can finally enter the evaluation score for the answer video on the score evaluation layer (L15), and the evaluation score entered on the score evaluation layer (L15) can be displayed in area A11.
FIG. 9 schematically illustrates a process in which the machine learning model is trained according to the model learning step (S12) according to an embodiment of the present invention.
As shown in FIG. 9, for the machine learning model included in the server system (1000), the model learning unit (1300) performs the model learning step (S12) of training the machine learning model based on the above-described first evaluatee competency information and updating it into a reinforced machine learning model.
Specifically, through the model learning step (S12), the machine learning model can be trained by receiving as input the one or more behavior indicators corresponding to the specific competency included in the first evaluatee competency information and, for each behavior indicator, the specific region of the script selected by the evaluator.
Preferably, a machine learning model performs the evaluation for one specific competency, and the server system (1000) may accordingly include one or more machine learning models for each competency. Meanwhile, since a machine learning model performs the evaluation for a specific competency, only the first evaluatee competency information for that specific competency, that is, the first evaluatee competency information entered by an evaluator for the answer videos performed by a first evaluatee with respect to that competency, may be used as training data for the model. Alternatively, in the model learning step (S12), the first evaluatee competency information for each of a plurality of competencies may be labeled, and the labeled first evaluatee competency information may be used as training data, so that first evaluatee competency information for competencies other than the specific competency evaluated by the machine learning model may also be used as training data.
More specifically, in the model learning step (S12), the one or more behavior indicators corresponding to the specific competency included in the first evaluatee competency information and, for each of the one or more behavior indicators, the specific region of the script selected by the evaluator are input to the machine learning model to train it; in the competency information derivation step (S14), second evaluatee competency information including discovery probability information for each behavior indicator can then be derived through the trained machine learning model.
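As a concrete illustration of this step, the following minimal sketch assumes the training data are evaluator-tagged (script span, behavior indicator) pairs and uses a TF-IDF multi-label classifier purely as a stand-in for whatever architecture the actual model uses; all names and example data are hypothetical.

```python
# Hypothetical training sketch for one competency's model (S12): each
# example is an evaluator-selected script span labeled with the behavior
# indicator(s) it evidences. The trained model then yields per-indicator
# discovery probabilities for a new answer script (S14).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

spans = [
    "I shared our findings with the whole team every Friday",
    "I set a weekly goal and tracked our progress against it",
]
labels = [["shares_knowledge"], ["goal_oriented"]]

mlb = MultiLabelBinarizer()
y = mlb.fit_transform(labels)  # one binary column per behavior indicator

model = make_pipeline(TfidfVectorizer(),
                      OneVsRestClassifier(LogisticRegression()))
model.fit(spans, y)

# Discovery probability per behavior indicator for a new answer script:
probs = model.predict_proba(["we reviewed the plan together as a team"])[0]
print(dict(zip(mlb.classes_, probs)))
```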
In addition, in the model learning step (S12), the machine learning model may be trained using the answer videos corresponding to the first evaluatee competency information as additional training data; a machine learning model trained in this way may also perform the evaluation by analyzing the evaluatee's facial expressions, emotions, and the like in the answer video.
Meanwhile, in another embodiment of the present invention, in the model learning step (S12), the machine learning model may be trained using only the one or more behavior indicators corresponding to the specific competency included in the first evaluatee competency information as training data, and the evaluation score included in the first evaluatee competency information, or the specific region of the script selected by the evaluator for each behavior indicator included in the first evaluatee competency information, may additionally be used as training data.
In addition, the model learning unit (1300) may train the above-described in-depth question recommendation model; specifically, the model learning unit (1300) may train the in-depth question recommendation model using, as training data, the in-depth questions entered by evaluators that are included in the first evaluatee competency information.
2. Method of deriving in-depth questions for automated evaluation of interview videos using a machine learning model
Section '1. Method of providing automated evaluation of interview videos using a machine learning model' above described a schematic method for deriving the second evaluatee competency information based on the answer videos performed by the second evaluatee.
The following describes, as a concrete method for deriving the second evaluatee competency information, a method of setting in-depth questions according to the answer videos performed by the second evaluatee and deriving the evaluation result by additionally considering the answer videos performed by the second evaluatee in response to the set in-depth questions.
Meanwhile, section '1. Method of providing automated evaluation of interview videos using a machine learning model' above distinguished the first evaluatee from the second evaluatee, and the first evaluatee competency information from the second evaluatee competency information, in order to easily separate direct evaluation by an evaluator from evaluation by the server system through the machine learning model. Since the following description centers on evaluation based on answer videos in the server system, the evaluatee described below may correspond to the above-described second evaluatee, and likewise the comprehensive evaluation information described below may correspond to the above-described second evaluatee competency information or comprehensive evaluatee competency information.
FIG. 10 schematically shows the detailed configuration of the competency information derivation unit (1500) according to an embodiment of the present invention.
As shown in FIG. 10, the steps of setting in-depth questions based on the answer videos performed by the evaluatee and deriving the comprehensive evaluation information according to the answer videos performed by the evaluatee in response to the in-depth questions may be performed in the competency information derivation unit (1500).
Specifically, the competency information derivation unit (1500) includes a general question unit (1510). The general question unit (1510) includes a first question providing unit (1511), which initially provides the evaluatee with one or more questions about a specific competency in order to evaluate that competency, and a first output information derivation unit (1512), which derives the first output information based on the answer videos performed by the evaluatee in response to the one or more questions provided by the first question providing unit (1511).
When the evaluatee requests, through the evaluatee terminal, an evaluation of a specific competency, an interview evaluation for a company to which the evaluatee wishes to apply, or an interview evaluation for a job at such a company, the first question providing unit (1511) provides one or more questions about the specific competency, the competency corresponding to that company, or the competency corresponding to that job. The one or more questions may correspond to questions designed so that, for the one or more behavior indicators related to the specific competency, those behavior indicators can be observed in the evaluatee's answers.
The evaluatee may receive the one or more questions provided by the first question providing unit (1511) through the evaluatee terminal and may generate answer videos for the one or more questions through the evaluatee terminal. The evaluatee terminal then transmits the generated answer videos to the server system (1000).
The first output information derivation unit (1512) derives the first output information by inputting the answer videos performed by the evaluatee, as received by the server system (1000), into the above-described machine learning model. More specifically, the first output information may include evaluation information on the specific competency, produced through the machine learning model based on the answer videos performed by the evaluatee, together with the derived behavior indicators related to that evaluation information.
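Read as a data structure, the first output information described above could be represented roughly as follows; this is a sketch, and the field names are assumptions rather than terms from the document.

```python
# Sketch of "first output information": per-indicator discovery
# probabilities, the answer-script text supporting each indicator, and
# the indicators that ended up counting as derived behavior indicators.
from dataclasses import dataclass

@dataclass
class OutputInfo:
    competency: str
    discovery_prob: dict[str, float]   # behavior indicator -> probability it was observed
    evidence_text: dict[str, str]      # behavior indicator -> supporting script excerpt
    derived_indicators: list[str]      # indicators whose probability passed the threshold
```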
Meanwhile, the competency information derivation unit (1500) may further include an in-depth question setting unit (1520), which derives the in-depth questions to be provided to the evaluatee based on the first output information derived by the first output information derivation unit (1512).
Specifically, for the behavior indicators, among the plurality of behavior indicators corresponding to the specific competency, that do not correspond to the derived behavior indicators included in the first output information, the in-depth question setting unit (1520) may derive in-depth questions capable of eliciting from the evaluatee answers related to those non-derived behavior indicators.
To this end, the in-depth question setting unit (1520) may derive the in-depth questions by selecting, from the one or more questions related to the plurality of behavior indicators for the specific competency preset in the server system (1000), the questions related to the non-derived behavior indicators; alternatively, it may derive the in-depth questions using a rule-based or machine-learned in-depth question recommendation model.
Finally, the competency information derivation unit (1500) may further include a competency evaluation unit (1530), which provides the evaluatee with the in-depth questions derived by the in-depth question setting unit (1520) and finally performs the evaluation of the specific competency based on the answer videos performed by the evaluatee in response to the in-depth questions.
Specifically, the competency evaluation unit (1530) includes an in-depth question unit (1540) and a comprehensive evaluation information derivation unit (1550). The in-depth question unit (1540) includes a second question providing unit (1541), which provides the evaluatee with the one or more in-depth questions derived by the in-depth question setting unit (1520), and a second output information derivation unit (1542), which derives the second output information based on the answer videos performed by the evaluatee in response to the one or more in-depth questions provided by the second question providing unit (1541).
Meanwhile, in another embodiment of the present invention, the first question providing unit (1511) and the second question providing unit (1541) may be included in the question providing unit (1400) of the above-described server system (1000) to provide the evaluatee with the questions about the specific competency and the in-depth questions.
The evaluatee may receive the one or more in-depth questions provided by the second question providing unit (1541) through the evaluatee terminal and may generate answer videos for the one or more in-depth questions through the evaluatee terminal. The evaluatee terminal then transmits the generated answer videos for the one or more in-depth questions to the server system (1000).
The second output information derivation unit (1542) derives the second output information by inputting the answer videos performed by the evaluatee in response to the one or more in-depth questions, as received by the server system (1000), into the machine learning model. Like the first output information, the second output information may include evaluation information on the specific competency, produced through the machine learning model based on the answer videos performed by the evaluatee in response to the one or more in-depth questions, together with the derived behavior indicators related to that evaluation information.
The comprehensive evaluation information derivation unit (1550) derives the comprehensive evaluation information based on the first output information derived by the first output information derivation unit (1512) and the second output information derived by the second output information derivation unit (1542); the comprehensive evaluation information may correspond to the above-described second evaluatee competency information or comprehensive evaluatee competency information.
Whereas the competency information derivation unit (1500) in section '1. Method of providing automated evaluation of interview videos using a machine learning model' above simply derives the evaluatee competency information based on the answer videos performed by the evaluatee, in the present configuration the in-depth questions are derived according to the answer videos the evaluatee performed first, and the evaluation further considers the answer videos the evaluatee performed in response to those in-depth questions, so that a more reliable evaluation based on behavior-based interviewing can be performed.
FIG. 11 schematically illustrates a method, performed by the server system (1000), of deriving in-depth questions for automated evaluation of interview videos according to an embodiment of the present invention.
As shown in FIG. 11, this is an automated method of evaluating an evaluatee based on behavior indicators, performed in the server system (1000), in which a plurality of behavior indicators and a plurality of questions are preset in the server system (1000) for a specific competency, each of the plurality of behavior indicators being associated with one or more of the plurality of questions. The automated evaluation method may include: a general question step including a first question providing step (S20) of providing the evaluatee with one or more of the preset questions for performing the evaluation of the specific competency, and a first output information derivation step (S21) of inputting the answer videos performed by the evaluatee in response to the one or more questions provided in the first question providing step (S20) into a machine learning model to derive first output information including evaluation information on the specific competency of the evaluatee and derived behavior indicators related to the evaluation information; an in-depth question setting step (S22) of setting, after the general question step has been performed one or more times, one or more in-depth questions based on the one or more derived behavior indicators so derived; and a competency evaluation step of performing the evaluation of the specific competency based on the answer videos performed by the evaluatee in response to the in-depth questions and on the first output information derived in the first output information derivation step (S21).
Specifically, to perform the method of deriving in-depth questions for automated evaluation of answer videos, the server system (1000) performs the first question providing step (S20) of providing the evaluatee, at the evaluatee's request, with one or more questions for performing the evaluation of a specific competency. As described with reference to FIG. 3, a plurality of behavior indicators and a plurality of questions related to each competency are preset in the server system (1000) for each competency, and each of the plurality of behavior indicators is associated with one or more of the plurality of questions.
Through the first question providing step (S20), one or more questions about the competency to be evaluated may be provided to the evaluatee terminal so that the evaluatee can generate answer videos for the one or more questions. The answer videos performed by the evaluatee generated in this way may be transmitted to the server system (1000) and stored in the DB (1600).
Meanwhile, the first output information derivation step (S21) derives the first output information by inputting the answer videos performed by the evaluatee into the machine learning model. The first output information includes evaluation information on the specific competency to be evaluated and derived behavior indicators related to the evaluation information. The evaluation information may include discovery probability information for each behavior indicator related to the specific competency in the answer video, and text information on the specific contents of the answer video related to each behavior indicator. In addition, the derived behavior indicators are the behavior indicators, among the plurality of behavior indicators related to the specific competency to be evaluated, that are observed in the contents of the answer video; preferably, the behavior indicators whose discovery probability information exceeds a predetermined value may be derived as the derived behavior indicators.
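The preferred rule stated above, that an indicator becomes a derived behavior indicator when its discovery probability exceeds a predetermined value, can be sketched as a one-line filter; the 0.5 threshold below is illustrative, not a value from the document.

```python
# Minimal sketch: an indicator counts as "derived" when its discovery
# probability exceeds a predetermined value (threshold is assumed).
def derive_indicators(discovery_prob: dict[str, float],
                      threshold: float = 0.5) -> list[str]:
    return [ind for ind, p in discovery_prob.items() if p > threshold]

# Example:
probs = {"shares_knowledge": 0.81, "goal_oriented": 0.32, "resolves_conflict": 0.55}
print(derive_indicators(probs))  # ['shares_knowledge', 'resolves_conflict']
```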
In another embodiment of the present invention, the general question step including the first question providing step (S20) and the first output information derivation step (S21) may be repeated a plurality of times. For example, when there are a plurality of questions about the specific competency to be evaluated, the general question step may be performed repeatedly, as many times as there are questions, to derive first output information for each question.
In yet another embodiment of the present invention, the first question providing step (S20) may provide a plurality of questions to the evaluatee at once, and the first output information derivation step (S21) may be performed a plurality of times in order to derive first output information for the answer video to each question.
The in-depth question setting step (S22) derives one or more in-depth questions based on the one or more derived behavior indicators included in the one or more pieces of first output information derived in the first output information derivation step (S21). More specifically, the in-depth question setting step (S22) derives one or more in-depth questions for eliciting from the evaluatee answers on the behavior indicators, among the plurality of behavior indicators corresponding to the specific competency to be evaluated, that are not included in the one or more derived behavior indicators; to derive the one or more in-depth questions, a question related to a behavior indicator not included in the derived behavior indicators may be selected as an in-depth question from the one or more questions preset for the specific competency, or the in-depth questions may be derived through a rule-based or machine-learned model.
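Under the pool-based variant, in-depth question selection reduces to a lookup: for every behavior indicator of the competency that the derived indicators do not cover, pick a question from that indicator's preset pool. A minimal sketch follows, with the pool structure assumed.

```python
# Sketch of S22 as a pool lookup: for each behavior indicator missing
# from the derived indicators, take a question from that indicator's
# preset pool. The dict-of-lists pool structure is an assumption.
def set_followup_questions(all_indicators: list[str],
                           derived: list[str],
                           question_pool: dict[str, list[str]]) -> list[str]:
    missing = [ind for ind in all_indicators if ind not in derived]
    followups = []
    for ind in missing:
        pool = question_pool.get(ind, [])
        if pool:
            followups.append(pool[0])  # e.g. the first (or highest-priority) question
    return followups
```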
Meanwhile, the competency evaluation step may include: an in-depth question step including a second question providing step (S23) of providing the evaluatee with one or more of the in-depth questions set in the in-depth question setting step (S22), and a second output information derivation step (S24) of inputting the answer videos performed by the evaluatee in response to the one or more in-depth questions provided in the second question providing step (S23) into the machine learning model to derive second output information including evaluation information on the specific competency of the evaluatee and derived behavior indicators related to the evaluation information; and a comprehensive evaluation information derivation step (S25) of deriving comprehensive evaluation information on the specific competency of the evaluatee based on the first output information and the second output information.
Specifically, the second question providing step (S23) transmits the one or more in-depth questions derived in the in-depth question setting step (S22) to the evaluatee terminal to provide them to the evaluatee, and the evaluatee can generate answer videos for the one or more in-depth questions provided in the second question providing step (S23) through the evaluatee terminal. The evaluatee terminal transmits the generated answer videos performed by the evaluatee for the one or more in-depth questions to the server system (1000), and the server system (1000) may receive the answer videos and store them in the DB (1600).
Meanwhile, the second output information derivation step (S24) derives the second output information by inputting the answer videos performed by the evaluatee in response to the one or more in-depth questions into the machine learning model. The machine learning model used in the second output information derivation step (S24) may be the same as the machine learning model in the above-described first output information derivation step (S21). The configuration of the second output information derived in the second output information derivation step (S24) is the same as that of the first output information derived in the first output information derivation step (S21); however, since the second output information corresponds to output information derived based on the answer videos performed by the evaluatee in response to the one or more in-depth questions that were themselves derived based on the first output information, it may be used, together with the first output information, as an element for deriving the comprehensive evaluation information on the specific competency of the evaluatee described below.
The comprehensive evaluation information derivation step (S25) derives the comprehensive evaluation information on the specific competency of the evaluatee based on the first output information and the second output information. More specifically, the first output information is derived based on the answer videos performed by the evaluatee in response to the one or more questions related to the specific competency for which the evaluatee wishes to be evaluated in the first question providing step (S20), and the first output information includes information on the derived behavior indicators for the specific competency observable in those answer videos. Meanwhile, the second output information is derived based on the answer videos performed by the evaluatee in response to the one or more in-depth questions intended to elicit from the evaluatee answers on the behavior indicators, among the plurality of behavior indicators for the specific competency, that do not correspond to the derived behavior indicators included in the first output information, and the second output information includes information on the derived behavior indicators for the specific competency observable in those answer videos. Accordingly, the derived behavior indicators included in the first output information and the second output information may together cover all of the plurality of behavior indicators for the specific competency, and as a result, the evaluation of the specific competency can be performed based on the first output information and the second output information.
In another embodiment of the present invention, when, among the plurality of behavior indicators for the specific competency, there remain behavior indicators that correspond neither to the derived behavior indicators included in the first output information nor to those included in the second output information, the in-depth question setting step (S22) is performed again to derive additional in-depth questions for the behavior indicators not corresponding to the derived behavior indicators, and likewise the second question providing step (S23) and the second output information derivation step (S24) are performed again to derive output information for the answer videos performed by the evaluatee with respect to those behavior indicators; this iterative process may be repeated until the one or more derived behavior indicators included in the respective pieces of output information cover all of the plurality of behavior indicators for the specific competency.
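The iterative embodiment can be pictured as a loop that repeats the S22-S24 cycle until the union of derived behavior indicators covers the competency. In the sketch below, ask() and evaluate() stand in for the question-providing and output-derivation steps, and the round cap is an added safety assumption rather than something stated in the document.

```python
# Illustrative control flow for the iterative embodiment: keep asking
# in-depth questions until every behavior indicator of the competency
# has been covered, or a (assumed) round cap is hit.
def run_interview(all_indicators, first_questions, question_pool, ask, evaluate):
    covered, outputs = set(), []
    out = evaluate(ask(first_questions))                # S20 + S21
    outputs.append(out)
    covered |= set(out["derived_indicators"])
    for _ in range(5):                                  # round cap: an added assumption
        missing = [i for i in all_indicators if i not in covered]
        if not missing:
            break
        followups = [question_pool[i][0] for i in missing if question_pool.get(i)]  # S22
        out = evaluate(ask(followups))                  # S23 + S24
        outputs.append(out)
        covered |= set(out["derived_indicators"])
    return outputs                                      # combined into comprehensive info in S25
```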
In yet another embodiment of the present invention, when the one or more derived behavior indicators included in the first output information derived in the first output information derivation step (S21) already include all of the plurality of behavior indicators for the specific competency, the comprehensive evaluation information derivation step (S25) may be performed immediately without performing the steps related to the in-depth questions; in such a case, the comprehensive evaluation information derivation step (S25) may derive the comprehensive evaluation information based on the first output information.
FIG. 12 schematically shows the detailed steps of the in-depth question setting step (S22) according to an embodiment of the present invention.
As shown in FIG. 12, in the in-depth question setting step (S22), based on the plurality of behavior indicators set for the specific competency and the one or more derived behavior indicators derived through the general question step, the behavior indicators among the plurality of behavior indicators that have not been derived as derived behavior indicators are determined, and one or more in-depth questions may be set for eliciting from the evaluatee answers related to the behavior indicators not derived as derived behavior indicators.
Specifically, the in-depth question setting step (S22) includes a step (S30) of determining the behavior indicators, among the plurality of behavior indicators corresponding to the specific competency to be evaluated, that are not included in the one or more derived behavior indicators included in the first output information. By determining, through step S30, only the behavior indicators not included in the first output information, in-depth questions capable of eliciting from the evaluatee answers related to those behavior indicators can be derived in the step (S31) described below.
Subsequently, the in-depth question setting step (S22) further includes a step (S31) of setting one or more in-depth questions for the behavior indicators not included in the derived behavior indicators. In step S31, one or more in-depth questions associated with the relevant behavior indicators are derived so that the behavior indicators not derived as derived behavior indicators can be observed in the evaluatee's answers. From the one or more questions corresponding to each behavior indicator preset in the server system (1000), the one or more questions corresponding to the behavior indicators not derived as derived behavior indicators, or a specific question among those questions, may be derived as the in-depth questions.
The in-depth question setting step (S22) performed by the in-depth question setting unit (1520) shown in FIG. 12 corresponds to a method of deriving, by performing predetermined steps, a specific question as an in-depth question from the question pool corresponding to the behavior indicators not corresponding to the derived behavior indicators, among the question pools stored per behavior indicator in the server system (1000); in another embodiment of the present invention, however, the in-depth questions may be derived using a machine-learned in-depth question recommendation model.
FIG. 13 schematically shows the detailed steps of the in-depth question setting step implemented in another way according to an embodiment of the present invention.
As shown in FIG. 13, in the in-depth question setting step, based on the plurality of behavior indicators set for the specific competency and the one or more derived behavior indicators derived through the general question step, the behavior indicators among the plurality of behavior indicators that have been derived as derived behavior indicators but do not satisfy a preset determination criterion are determined to be incomplete behavior indicators, and one or more in-depth questions may be set for eliciting from the evaluatee answers related to the incomplete behavior indicators.
As described above with reference to FIG. 12, the in-depth question setting step (S22) may determine the behavior indicators not derived as derived behavior indicators and set one or more in-depth questions for eliciting from the evaluatee answers on those behavior indicators.
Meanwhile, in another embodiment of the present invention, as shown in FIG. 13, the in-depth question setting step (S22) may determine, among the plurality of behavior indicators set for the specific competency, the behavior indicators that have been derived as derived behavior indicators but not completely, that is, incomplete behavior indicators, and may set one or more in-depth questions for eliciting from the evaluatee answers on the incomplete behavior indicators.
Specifically, the in-depth question setting step (S22) determines (S40) the behavior indicators, among the plurality of behavior indicators corresponding to the specific competency to be evaluated, that are included in the one or more derived behavior indicators included in the first output information but do not satisfy a preset determination criterion. For example, the preset determination criterion may correspond to a reference value for determining to what degree a behavior indicator can be included in the derived behavior indicators. When a specific behavior indicator corresponding to the specific competency is included in the derived behavior indicators and satisfies the determination criterion, the specific behavior indicator can be determined to have been fully derived as a derived behavior indicator. On the other hand, when the specific behavior indicator is included in the derived behavior indicators but does not satisfy the determination criterion, the specific behavior indicator is determined to be an imperfectly derived behavior indicator, that is, an incomplete behavior indicator.
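Treating the determination criterion as a second, higher threshold on the discovery probability, the FIG. 12 and FIG. 13 variants can be combined into a single classification; both threshold values below are illustrative assumptions.

```python
# Sketch of step S40 with an assumed probability-based criterion:
# separate fully derived, incomplete, and missing behavior indicators.
def classify_indicators(discovery_prob: dict[str, float],
                        derive_threshold: float = 0.5,
                        complete_threshold: float = 0.8):
    complete, incomplete, missing = [], [], []
    for ind, p in discovery_prob.items():
        if p > complete_threshold:
            complete.append(ind)        # derived and meets the criterion
        elif p > derive_threshold:
            incomplete.append(ind)      # derived, but not convincingly -> ask again (FIG. 13)
        else:
            missing.append(ind)         # not derived at all (FIG. 12 case)
    return complete, incomplete, missing
```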
In this way, by determining through step S40 the behavior indicators that are included in the first output information but do not correspond to clearly observed behavior indicators as incomplete behavior indicators, in-depth questions corresponding to questions capable of eliciting from the evaluatee answers related to the incomplete behavior indicators can be derived in step S41 described below.
In step S41, one or more in-depth questions associated with the incomplete behavior indicators are derived so that the incomplete behavior indicators can be observed in the evaluatee's answers. From the one or more questions corresponding to each behavior indicator preset in the server system (1000), the one or more questions corresponding to the incomplete behavior indicators, or a specific question among those questions, may be derived as the in-depth questions.
The in-depth question setting step (S22) performed by the in-depth question setting unit (1520) shown in FIG. 13 corresponds to a method of deriving, by performing predetermined steps, a specific question as an in-depth question from the question pool corresponding to the incomplete behavior indicators, among the question pools stored per behavior indicator in the server system (1000); in another embodiment of the present invention, however, the in-depth questions may be derived using a machine-learned in-depth question recommendation model.
As such, the in-depth question setting step (S22) may use only one of the two methods: the method of setting one or more in-depth questions for the behavior indicators not derived as derived behavior indicators described with reference to FIG. 12, or the method of setting one or more in-depth questions for the incomplete behavior indicators described with reference to FIG. 13. In addition, in another embodiment of the present invention, the in-depth question setting step (S22) may use both methods to set one or more in-depth questions for each of the behavior indicators not derived as derived behavior indicators and the incomplete behavior indicators.
FIG. 14 schematically illustrates a process in which the competency information derivation unit (1500) derives output information by means of the machine learning model according to an embodiment of the present invention.
As shown in FIG. 14, the competency information derivation unit (1500) derives output information by inputting the evaluatee's answer videos into the machine learning model. Specifically, the first output information derivation unit (1512) included in the competency information derivation unit (1500) can derive the first output information by inputting into the machine learning model the answer videos performed by the evaluatee in response to the one or more questions provided by the first question providing unit (1511), and the second output information derivation unit (1542) included in the competency information derivation unit (1500) can derive the second output information by inputting into the machine learning model the answer videos performed by the evaluatee in response to the one or more in-depth questions provided by the second question providing unit (1541).
Specifically, the machine learning model may include various detailed machine learning models that perform evaluation on the answer videos performed by the evaluatee. Each detailed machine learning model may be a detailed machine learning model trained on a deep learning basis to perform the evaluation, or a detailed machine learning model that derives feature information on the answer video according to a preset routine or algorithm rather than learning and performs the evaluation on the derived feature information.
In an embodiment of the present invention, the competency information derivation unit (1500) basically receives as input the answer videos performed by the evaluatee, which include a plurality of consecutive image frames and audio information, and derives output information through a machine learning model trained by machine learning techniques such as deep learning. Additionally, the competency information derivation unit (1500) may analyze the answer videos based on preset rules rather than machine learning and derive specific evaluation values. The competency information derivation unit (1500) may extract image and audio information from the answer videos, which include a plurality of consecutive images and audio, and input them into the respective detailed machine learning models to derive result values, or may combine the image and audio information and input it into a detailed machine learning model to derive result values.
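The two routing options described above, per-modality detailed models with merged results versus a combined input to a single model, correspond to what is commonly called late and early fusion; the sketch below illustrates both under assumed model interfaces and an assumed equal weighting.

```python
# Hedged sketch of the two routing options; the callables and the
# weighting scheme are assumptions for illustration only.
import numpy as np

def late_fusion(video_feat, audio_feat, video_model, audio_model, w=0.5):
    # Option 1: separate detailed models per modality, merged scores.
    return w * video_model(video_feat) + (1 - w) * audio_model(audio_feat)

def early_fusion(video_feat, audio_feat, joint_model):
    # Option 2: combined image+audio features into one detailed model.
    return joint_model(np.concatenate([video_feat, audio_feat]))
```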
Meanwhile, the competency information derivation unit (1500) may include the machine learning model and derive output information based on the feature information derived from the answer videos, or the competency information derivation unit (1500) may call a separately provided machine learning model to derive output information based on the feature information derived from the answer videos.
The detailed configuration in which the competency information derivation unit (1500) inputs the answer videos performed by the evaluatee into the machine learning model and derives the output information through the machine learning model is described in detail with reference to FIG. 16.
FIG. 15 schematically illustrates a process of deriving in-depth questions by means of the in-depth question recommendation model in the in-depth question setting step (S22) according to an embodiment of the present invention.
As shown in FIG. 15, in the in-depth question setting step (S22), the first output information derived in the first output information derivation step (S21) may be input into a machine-learning-based in-depth question recommendation model to derive one or more in-depth questions for eliciting from the evaluatee answers related to the behavior indicators not derived as derived behavior indicators.
Specifically, the in-depth question setting unit (1520) may derive the in-depth questions by performing the predetermined steps described above with reference to FIG. 12, or may derive the in-depth questions by inputting the first output information into the in-depth question recommendation model as shown in FIG. 15. The in-depth question recommendation model can perform learning based on the in-depth question information included in the above-described first evaluatee competency information; more specifically, the in-depth question information corresponds to the in-depth questions entered by the evaluator on the in-depth question layer included in the evaluation interface.
Meanwhile, the in-depth question recommendation model may learn only the in-depth question information, but preferably additionally learns the behavior indicators related to the in-depth questions entered by the evaluator on the evaluation interface, so that it can learn the relation between unobserved behavior indicators and the corresponding in-depth questions.
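One simple way such a recommendation model could exploit the learned (unobserved indicator, in-depth question) pairs is nearest-neighbor retrieval over the indicator texts; the TF-IDF retrieval below is a hedged stand-in for the actual learned model, with toy training pairs.

```python
# Hypothetical retrieval sketch: recommend the stored in-depth question
# whose associated indicator text is most similar to a newly unobserved
# indicator. Training pairs here are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

train_indicators = ["shares knowledge with team members",
                    "sets measurable goals"]
train_questions = ["Tell me about a time you shared what you learned with colleagues.",
                   "How did you decide whether the project was on track?"]

vec = TfidfVectorizer().fit(train_indicators)
X = vec.transform(train_indicators)

def recommend(unobserved_indicator: str) -> str:
    sims = cosine_similarity(vec.transform([unobserved_indicator]), X)[0]
    return train_questions[int(sims.argmax())]

print(recommend("shares information across the team"))
```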
In addition, the in-depth question recommendation model may include various detailed machine learning models that derive in-depth questions based on the evaluatee's answer videos; each detailed machine learning model may be a detailed machine learning model trained on a deep learning basis to derive the in-depth questions, or a detailed machine learning model that derives feature information according to a preset routine or algorithm rather than learning and derives the in-depth questions based on the derived feature information.
Meanwhile, the in-depth question setting unit (1520) may include the in-depth question recommendation model and derive one or more in-depth questions based on the first output information, or the in-depth question setting unit (1520) may call a separately provided in-depth question recommendation model to derive one or more in-depth questions based on the first output information.
In addition, although the in-depth question recommendation model shown in FIG. 15 is depicted as a separate model distinct from the machine learning model shown in FIG. 14, in another embodiment of the present invention the in-depth question recommendation model may be included in the machine learning model and may derive one or more in-depth questions by receiving the first output information derived by the detailed machine learning models included in the machine learning model.
FIG. 16 schematically illustrates a configuration in which an answer video performed by an evaluatee is input to the machine learning model and output information is derived from the machine learning model, according to an embodiment of the present invention.
As shown in FIG. 16, the first output information derivation step (S21) and the second output information derivation step (S24) performed by the competency information derivation unit 1500 receive an answer video performed by the evaluatee, process the answer video through predetermined steps, and input the processed answer video to the machine learning model to derive output information. The diagrams shown in (A), (B), and (C) of FIG. 16 correspond to various embodiments of the configuration of the input elements that the competency information derivation unit 1500 feeds to the machine learning model.
Specifically, as shown in (A) to (C) of FIG. 16, in the first output information derivation step (S21) and the second output information derivation step (S24), image information and voice information are separated from the answer video performed by the evaluatee, and the separated image information and voice information are each preprocessed and input to the machine learning model.
In the first output information derivation step (S21), an answer video performed by the evaluatee in response to the one or more questions provided by the first question providing unit 1511 is received, and image information and voice information are separated from that answer video. Likewise, in the second output information derivation step (S24), an answer video performed by the evaluatee in response to the one or more in-depth questions provided by the second question providing unit 1541 is received, and image information and voice information are separated from that answer video. More specifically, the competency information derivation unit 1500 includes a video-audio separation module, and the video-audio separation module separates each answer video received in the first output information derivation step (S21) and the second output information derivation step (S24) into image information and voice information.
Meanwhile, the image information and voice information separated by the video-audio separation module are each individually preprocessed and input to the machine learning model. More specifically, the competency information derivation unit 1500 further includes a preprocessing module, and the preprocessing module preprocesses each of the image information and the voice information. Through the preprocessing module, the image information and the voice information are converted into a form suited to the algorithm of the machine learning model, which can improve the performance of the machine learning model.
To this end, the preprocessing module may process missing values or features in a Data Cleaning step on the image information and voice information, encode categorical attributes into numeric data (for example by one-hot encoding) in a Handling Text and Categorical Attributes step, transform the data in a Custom Transformers step, set the range of the data in a Feature Scaling step, and automate this sequence in a Transformation Pipelines step. The steps performed in the preprocessing module are not limited to those described above and may include various other preprocessing steps for the machine learning model.
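For illustration only, the following is a minimal sketch of such a preprocessing pipeline in Python using scikit-learn; the feature names, imputation strategy, and library choice are assumptions introduced here and do not appear in the embodiment above.

```python
# Illustrative sketch of the preprocessing stage described above.
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.compose import ColumnTransformer

# Hypothetical per-segment features extracted from an answer video.
numeric_features = ["pitch", "intensity", "speech_rate"]
categorical_features = ["detected_emotion"]

preprocessing = ColumnTransformer([
    # Data Cleaning + Feature Scaling for numeric attributes
    ("num", Pipeline([
        ("impute", SimpleImputer(strategy="median")),  # handle missing values
        ("scale", StandardScaler()),                   # set the data range
    ]), numeric_features),
    # Handling Text and Categorical Attributes via one-hot encoding
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical_features),
])
# The ColumnTransformer acts as the Transformation Pipeline that automates
# the whole sequence: preprocessing.fit_transform(raw_feature_table)
```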
Meanwhile, the first output information derivation step (S21) and the second output information derivation step (S24) may include: deriving text information based on the answer video performed by the evaluatee; performing embedding that expresses the derived text information as a vector; and inputting the embedded vector to the machine learning model.
Specifically, the competency information derivation unit 1500 further includes an STT module, and the STT module performs Speech-to-Text (STT) conversion on the answer videos received in the first output information derivation step (S21) and the second output information derivation step (S24), thereby deriving text information for the speech in each answer video. The STT module may use any of various existing STT conversion methods. On the other hand, the text information need not be derived solely by STT conversion through the STT module: the text for an answer video may be entered directly by an administrator of the server system 1000 or the like, or the STT module may first derive text information for the answer video and an administrator of the server system 1000 or the like may then correct that text information to produce the final text information.
Meanwhile, in another embodiment of the present invention, the STT module may receive the voice information separated from the answer video by the video-audio separation module and perform STT conversion on it to convert that voice information into text information.
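As a concrete illustration, the STT conversion described above could be performed with an off-the-shelf library; the embodiment leaves the STT method open, so the SpeechRecognition package, file name, and language code below are assumptions.

```python
# Minimal STT sketch on the voice information separated from an answer video.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("answer_audio.wav") as source:  # hypothetical separated audio
    audio = recognizer.record(source)

# Any existing STT backend could be substituted for this call.
text = recognizer.recognize_google(audio, language="ko-KR")
print(text)  # transcript to be corrected or used as-is downstream
```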
Subsequently, the first output information derivation step (S21) and the second output information derivation step (S24) include performing embedding that expresses the derived text information as a vector. More specifically, the competency information derivation unit 1500 may further include an embedding module, and the embedding module may perform embedding on the text information derived based on the answer video.
The configuration in which only the text information derived from the answer video is embedded, and the embedded text vector, the preprocessed image information, and the preprocessed voice information are input to the machine learning model to derive output information, is the one shown in FIG. 16(A).
Meanwhile, in another embodiment of the present invention, as shown in FIG. 16(B), the competency information derivation unit 1500 may perform embedding that expresses as vectors both the text information derived from the answer video and the text of the question to which the evaluatee's answer video responds; the embedded question vector then corresponds to an additional input to the machine learning model. Accordingly, the machine learning model can derive more refined output information by considering not only the answer video but also the question it answers.
The embedding module may express each piece of text information in vector form using various embedding methods such as one-hot encoding, CountVectorizer, TfidfVectorizer, and Word2Vec. In the first output information derivation step (S21) and the second output information derivation step (S24), the vectors embedded in this way are input to the machine learning model, and the machine learning model receives the preprocessed image information and voice information together with the embedded vectors and derives output information for the answer video performed by the evaluatee.
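A minimal sketch of the embedding module follows, showing two of the methods named above via scikit-learn; the example transcripts are placeholders.

```python
# Illustrative embedding of STT transcripts into vectors.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

answer_texts = [
    "I organized the team and set weekly goals",    # hypothetical transcript
    "We analyzed the failure and revised the plan",
]

count_vectors = CountVectorizer().fit_transform(answer_texts)  # sparse counts
tfidf_vectors = TfidfVectorizer().fit_transform(answer_texts)  # tf-idf weights
# A Word2Vec-style embedding would instead map each token to a dense learned
# vector; either form of vector is what the machine learning model receives.
```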
As described, the machine learning model could derive output information from only the embedded vector of the text information derived from the answer video, or from the embedded vectors of both that text information and the text of the corresponding question. Preferably, however, the image information and the voice information are additionally input to the machine learning model, so that the context and intent of the evaluatee's answer, which are difficult to grasp from text alone, can be captured and more accurate output information derived.
Meanwhile, in yet another embodiment of the present invention, as shown in FIG. 16(C), the competency information derivation unit 1500 may input to the machine learning model the preprocessed image information, the preprocessed voice information, the text information derived from the evaluatee's answer video, and a competency identifier for that answer video, and thereby derive output information for the answer video.
The machine learning model shown in FIG. 16(C) may be a machine-learning-based model capable of evaluating a plurality of competencies rather than a single specific competency. In such a case, inputting the competency identifier to the machine learning model causes it to evaluate the specific competency corresponding to that identifier. That is, the machine learning model can evaluate each of a plurality of competencies, and output information is derived by inputting to the model both the answer video performed by the evaluatee and a competency identifier that identifies the specific competency to be evaluated through that answer video.
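A sketch of how the inputs of FIG. 16(C) might be assembled, with the competency identifier encoded as a one-hot vector, is shown below; the competency list and dictionary layout are hypothetical.

```python
# Hypothetical assembly of the FIG. 16(C) input elements.
import numpy as np

COMPETENCIES = ["communication", "problem_solving", "leadership"]

def build_model_inputs(video_feats, audio_feats, text_vector, competency):
    comp_id = np.zeros(len(COMPETENCIES), dtype=np.float32)
    comp_id[COMPETENCIES.index(competency)] = 1.0   # competency identifier
    return {
        "video": video_feats,      # preprocessed image information
        "audio": audio_feats,      # preprocessed voice information
        "text": text_vector,       # embedded transcript vector
        "competency_id": comp_id,  # selects which competency to score
    }
```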
In addition, although not shown in FIG. 16, the configurations of FIG. 16(B) and FIG. 16(C) may be combined, so that the preprocessed image information and preprocessed voice information of the evaluatee's answer video, the embedded vector of the text information derived from that answer video, the embedded vector of the text of the corresponding question, and the competency identifier corresponding to that answer video are all input to the machine learning model to derive output information.
FIG. 17 schematically illustrates a configuration in which an answer video performed by an evaluatee is input to the machine learning model, in-depth questions are set according to the derived output information, and comprehensive evaluation information is derived accordingly, according to an embodiment of the present invention.
As shown in FIG. 17, the first question providing unit 1511 provides the evaluatee with one or more questions related to a specific competency, and the first output information derivation unit 1512 processes the answer videos performed by the evaluatee for the one or more questions as shown in FIG. 16 and inputs the processed answer videos to the machine learning model to derive first output information.
Meanwhile, the one or more questions provided by the first question providing unit 1511 may be composed of independent questions whose contents do not take the relationships between questions into account, but preferably the questions are interrelated. For example, the first question may ask about the 'situation' of the evaluatee's past experience related to the specific competency; the second question, linked to the first, may ask what 'action' the evaluatee took in that situation; and the third question, linked to the first and second, may ask about the 'result' of the action described in the second question, so that the questions are mutually connected.
Also, in FIG. 17 the first question providing unit 1511 provides each question individually, and the first output information derivation unit 1512 derives first output information for each answer video performed per question. In another embodiment of the present invention, however, the first question providing unit 1511 may provide the one or more questions to the evaluatee all at once, and the first output information derivation unit 1512 may either split the evaluatee's answer video by question and input each segment to the machine learning model, or input the entire answer video to the machine learning model, to derive the first output information.
Meanwhile, the first output information derivation unit 1512 takes the behavioral indicators for the specific competency that are observed in the answer video as derived behavioral indicators and derives first output information containing them, and the in-depth question setting unit 1520 performs the in-depth question setting step (S22) to derive one or more in-depth questions based on the first output information.
In FIG. 17, the behavioral indicators for the specific competency are behavioral indicators 1 to 5. The derived behavioral indicators included in the first output information derived from the evaluatee's answer video to the first question include behavioral indicator 1; those derived from the answer video to the second question include behavioral indicators 3 and 4; and those derived from the answer video to the third question include behavioral indicator 4.
Meanwhile, the in-depth question setting unit 1520 derives in-depth questions related to the behavioral indicators that do not appear among the derived behavioral indicators in the first output information. In the example of FIG. 17, none of the first output information includes derived behavioral indicators corresponding to behavioral indicators 2 and 5, so the in-depth question setting unit 1520 derives one or more in-depth questions that can elicit answers related to behavioral indicators 2 and 5 from the evaluatee.
The in-depth question setting unit 1520 may derive, as in-depth questions, one or more of the preset questions associated with the behavioral indicators not covered by the derived behavioral indicators, or it may derive separate in-depth questions through the machine-learned in-depth question recommendation model, as sketched below.
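A minimal sketch of the rule-based variant, selecting preset questions for the unobserved indicators, follows; the indicator numbering matches the FIG. 17 example, while the question bank text is hypothetical.

```python
# Rule-based branch of the in-depth question setting step (S22):
# indicators never observed across the first answers are mapped to
# preset follow-up questions.
BEHAVIORAL_INDICATORS = {1, 2, 3, 4, 5}
QUESTION_BANK = {
    2: "Please describe a time you coordinated conflicting opinions.",
    5: "What did you learn from the outcome, and what would you change?",
}

def derive_deep_questions(derived_per_answer):
    observed = set().union(*derived_per_answer)   # e.g. [{1}, {3, 4}, {4}]
    missing = BEHAVIORAL_INDICATORS - observed    # -> {2, 5}
    return [QUESTION_BANK[i] for i in sorted(missing) if i in QUESTION_BANK]

print(derive_deep_questions([{1}, {3, 4}, {4}]))  # two follow-up questions
```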
When one or more in-depth questions have been derived by the in-depth question setting unit 1520 in this way, the second question providing unit 1541 provides the one or more in-depth questions to the evaluatee, and the second output information derivation unit 1542 processes the answer videos performed by the evaluatee for the one or more in-depth questions as shown in FIG. 16 and inputs the processed answer videos to the machine learning model to derive second output information. Likewise, the second output information derivation unit 1542 may take the behavioral indicators for the specific competency observed in the answer video as derived behavioral indicators and derive second output information containing them.
Meanwhile, the first output information derived in the first output information derivation step (S21) and the second output information derived in the second output information derivation step (S24) may further include discovery probability information for the derived behavioral indicators related to the evaluation information, and text information from the evaluatee's answer video corresponding to that discovery probability information.
Specifically, the first output information and the second output information derived through the machine learning model may include, for each of the one or more derived behavioral indicators corresponding to the evaluatee's answer video, discovery probability information indicating whether the answer video contains answer content related to that indicator; this discovery probability information may also be included in the comprehensive evaluation information derived in the comprehensive evaluation information derivation step (S25).
More specifically, just as the evaluator, as described above, selects a specific region of the script on the script layer and selects the corresponding specific behavioral indicator in the behavioral indicator list area, thereby marking the answer content in the first evaluatee's answer video that corresponds to that indicator, the machine learning model derives discovery probability information that probabilistically estimates, for each of the one or more behavioral indicators corresponding to an answer video, the likelihood that answer content related to that indicator is found in the video.
In addition, the first output information derivation unit 1512 and the second output information derivation unit 1542 may further include in the first output information and the second output information, respectively, the specific text information, from among the text information derived from the answer video by the competency information derivation unit 1500, that corresponds to the discovery probability information for each of the one or more derived behavioral indicators calculated by the machine learning model.
In another embodiment of the present invention, the first output information derivation unit 1512 and the second output information derivation unit 1542 may further include in the first output information and the second output information, respectively, the specific text information corresponding to those derived behavioral indicators whose discovery probability information, as calculated by the machine learning model, exceeds a predetermined value.
In yet another embodiment of the present invention, the specific text information corresponding to the discovery probability information for each of the one or more derived behavioral indicators may itself be derived through the machine learning model.
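The threshold-based variant could look like the following sketch; the threshold value, probabilities, and transcript spans are placeholders.

```python
# Attaching answer text to derived behavioral indicators whose discovery
# probability exceeds the "predetermined value" of the embodiment above.
THRESHOLD = 0.5  # hypothetical cutoff

model_output = {
    # indicator id: (discovery probability, supporting transcript span)
    1: (0.91, "I first analyzed why the schedule had slipped"),
    3: (0.42, "We had weekly standups"),
    4: (0.77, "I proposed splitting the task across two teams"),
}

output_information = {
    indicator: {"probability": p, "text": span}
    for indicator, (p, span) in model_output.items()
    if p > THRESHOLD
}
print(output_information)  # keeps indicators 1 and 4 only
```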
Meanwhile, the comprehensive evaluation information derived in the comprehensive evaluation information derivation step (S25) may include a score for the specific competency calculated by aggregating the discovery probability information for each of the derived behavioral indicators obtained in the first output information derivation step (S21) and the second output information derivation step (S24).
Specifically, in the comprehensive evaluation information derivation step (S25) performed by the comprehensive evaluation information derivation unit 1550, comprehensive evaluation information that finally evaluates the evaluatee's specific competency is derived based on the first output information and the second output information.
The comprehensive evaluation information derivation unit 1550 calculates a score for the answer videos performed by the evaluatee, analogous to the evaluation score entered by an evaluator on the score evaluation layer (L3) included in the evaluation interface described above, and this score may be included in the comprehensive evaluation information. Like the evaluation score entered by the evaluator on the score evaluation layer (L3), the score calculated by the comprehensive evaluation information derivation unit 1550 may be a specific score chosen from a plurality of scores set at specific intervals within a preset range.
Since the comprehensive evaluation information thus includes a score quantifying the evaluation of the specific competency, the degree to which the evaluatee possesses that competency can be presented numerically, as sketched below.
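One possible way to aggregate the discovery probabilities into such a grid-aligned score is sketched here; the averaging rule, scale, and step size are assumptions, since the embodiment does not fix a formula.

```python
# Hypothetical aggregation of discovery probabilities into a competency
# score snapped to a preset grid (0.5-point steps on a 0-5 scale).
import numpy as np

def competency_score(probabilities, high=5.0, step=0.5):
    raw = high * float(np.mean(probabilities))  # aggregate the evidence
    return round(raw / step) * step             # snap to the score grid

print(competency_score([0.91, 0.42, 0.77, 0.88]))  # -> 3.5
```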
In another embodiment of the present invention, the comprehensive evaluation information derived in the comprehensive evaluation information derivation step (S25) may include a score for the specific competency calculated by aggregating the preprocessing results for each of the answer videos input in the first output information derivation step (S21) and the second output information derivation step (S24).
Specifically, the comprehensive evaluation information derived in the comprehensive evaluation information derivation step (S25) may include a score for the evaluatee's specific competency derived by inputting, into a separate machine learning model, the one or more answer videos performed by the evaluatee that are input to the machine learning model in the first output information derivation step (S21) and the one or more answer videos performed by the evaluatee for the in-depth questions that are input to the machine learning model in the second output information derivation step (S24).
In addition, the answer videos input to the separate machine learning model may first pass through predetermined preprocessing steps, so that preprocessed answer videos are input to the separate machine learning model.
Meanwhile, the machine learning model that derives the first output information and the second output information and the separate machine learning model that derives the comprehensive evaluation information may be included in a single machine learning model. In that case, each answer video is input to the single machine learning model to derive the first output information and the second output information, and the separate machine learning model may derive the comprehensive evaluation information based on each answer video input to the single machine learning model or on the first output information and the second output information derived from it.
FIG. 18 schematically illustrates a configuration in which comprehensive evaluation information is derived by further including feature information derived from the machine learning model that received the evaluatee's answer videos, according to an embodiment of the present invention.
As shown in FIG. 18, the comprehensive evaluation information derived in the comprehensive evaluation information derivation step (S25) may include a score for the specific competency derived based on one or more of: the discovery probability information for the derived behavioral indicators included in the first output information and the second output information, the text information, basic score information for the corresponding answer video, and the feature information generated in the machine learning model in the course of deriving the first output information and the second output information.
Specifically, FIG. 18 depicts the process of deriving comprehensive evaluation information based on the answer videos performed by the evaluatee. As shown in FIG. 18, the answer video for each of the plurality of questions provided to the evaluatee is input to the machine learning model, and the machine learning model derives output information corresponding to the competency derivation result for each answer video. The output information may include discovery probability information for the derived behavioral indicators corresponding to the answer video, text information for the derived behavioral indicators, and basic score information. Unlike the score for the specific competency included in the comprehensive evaluation information, the basic score information corresponds to a score for that single answer video.
The process of deriving this output information may be performed in the first output information derivation step (S21) and the second output information derivation step (S24) described above. As explained with reference to FIG. 17, the plurality of questions provided to the evaluatee include the in-depth questions derived by the server system. Meanwhile, to derive the output information corresponding to the competency derivation result, the machine learning model that receives an answer video first derives feature information for that answer video and then derives the output information based on that feature information. The derivation of feature information in the machine learning model is described later with reference to FIG. 19.
Meanwhile, in the comprehensive evaluation information derivation step (S25), comprehensive evaluation information may be derived based on the one or more pieces of derived output information and on the feature information derived in the machine learning model for each answer video, and the comprehensive evaluation information may include a score for the specific competency to be evaluated.
More specifically, in the comprehensive evaluation information derivation step (S25), the output information derived in the first output information derivation step (S21) and the second output information derivation step (S24), together with the feature information derived in the machine learning model used in those steps, is input to a separate machine learning model, and the separate machine learning model derives comprehensive evaluation information including a score for the evaluatee's specific competency.
The separate machine learning model used in the comprehensive evaluation information derivation step (S25) differs from the machine learning model used in the first output information derivation step (S21) and the second output information derivation step (S24). In another embodiment of the present invention, however, an overall machine learning model may include both the machine learning model and the separate machine learning model, and in the first output information derivation step (S21), the second output information derivation step (S24), and the comprehensive evaluation information derivation step (S25), the output information and the comprehensive evaluation information may be derived through this overall machine learning model. The separate machine learning model may also derive the comprehensive evaluation information by deep-learning-based machine learning or by ensemble-learning-based machine learning.
Meanwhile, in the comprehensive evaluation information derivation step (S25), the output information and the feature information may be input to the separate machine learning model, or one or more of the discovery probability information, the text information, the basic score information for the answer video, and the feature information included in the output information may be input to derive the comprehensive evaluation information.
In this way, in an embodiment of the present invention, using the feature information derived from the machine learning model as an input element of the separate machine learning model in order to derive the comprehensive evaluation information in step S25 can yield more accurate competency evaluation results.
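As an illustration of the ensemble-learning variant of the separate machine learning model, the sketch below trains a gradient-boosting regressor on concatenated output and feature vectors; the stand-in data, shapes, and choice of regressor are assumptions, not the patented implementation.

```python
# Hypothetical second-stage model: each row would concatenate the per-answer
# discovery probabilities, basic scores, and machine-learning-model feature
# vectors for one evaluatee; the target is a competency score.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.random((32, 20))   # stand-in for concatenated outputs + features
y = rng.random(32) * 5.0   # stand-in for reference competency scores

second_stage = GradientBoostingRegressor().fit(X, y)
predicted_score = second_stage.predict(X[:1])  # comprehensive score estimate
```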
FIG. 19 schematically illustrates the internal configuration of a feature extraction model according to an embodiment of the present invention.
The machine learning model described above may include a feature extraction model and a feature inference model. The feature extraction model according to the embodiment shown in FIG. 19 may include: a first deep neural network that extracts spatial feature information, deriving a plurality of pieces of image feature information from the image information of a plurality of frames of the evaluatee's answer video; a second deep neural network that extracts spatial feature information, deriving a plurality of pieces of voice feature information from the voice information of the evaluatee's answer video; a first recurrent neural network module that receives the plurality of pieces of image feature information and derives first feature information; a second recurrent neural network module that receives the plurality of pieces of voice feature information and derives second feature information; and a third recurrent neural network module that derives third feature information from a script obtained by Speech-to-Text (STT) conversion of the voice information of the answer video or received, based on the answer video, from an administrator of the server system 1000 or the like.
The first deep neural network and the second deep neural network may each correspond to a CNN module or the like; in the embodiment shown in FIG. 19, the first deep neural network corresponds to a first CNN module and the second deep neural network corresponds to a second CNN module.
The first, second, and third recurrent neural network modules may each correspond to an LSTM module, which is a type of RNN module; in the embodiment shown in FIG. 19, the first recurrent neural network module corresponds to a first LSTM module, the second to a second LSTM module, and the third to a third LSTM module.
Hereinafter, the operation of a neural network according to an embodiment of the present invention is described based on the embodiment shown in FIG. 19.
The plurality of frames may be generated by splitting the images of the video at preset time intervals. The plurality of pieces of image feature information derived by the first CNN module are preferably input to the first LSTM module in time-series order.
Meanwhile, feature information about the voice over a preset time window (pitch, intensity, and the like), or the raw voice data itself, is input to the second CNN module, and the voice feature information derived by the second CNN module is preferably input to the second LSTM module in time-series order.
The voice feature information may correspond to pitch or intensity, but more preferably corresponds to Mel-Frequency Cepstral Coefficients (MFCC), obtained by dividing the voice into fixed segments, applying a Mel filter bank to the spectrum of each segment, and extracting features through cepstral analysis.
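For illustration, MFCC features of the kind described above can be extracted with librosa; the file name and parameter values below are assumptions.

```python
# MFCC extraction as one concrete choice of per-segment voice features.
import librosa

y, sr_hz = librosa.load("answer_audio.wav", sr=16000)
mfcc = librosa.feature.mfcc(y=y, sr=sr_hz, n_mfcc=13)  # shape: (13, n_frames)
```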
The script input to the feature extraction model preferably corresponds to a vector in which the script has been embedded token by token.
Meanwhile, the feature information (a vector sequence) corresponding to the output of the feature extraction model is derived based on the first feature information, the second feature information, and the third feature information. In the simplest method, the feature information may be derived by simply concatenating the first, second, and third feature information, or it may be derived by applying weights or the like to them.
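The following PyTorch sketch mirrors the FIG. 19 structure under assumed tensor shapes and layer sizes: two CNN encoders feed two LSTMs, a token embedding feeds a third LSTM, and the three feature vectors are simply concatenated. It is an interpretation of the description, not the patented implementation.

```python
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    def __init__(self, d=128, vocab=10000):
        super().__init__()
        self.cnn_v = nn.Sequential(nn.Conv2d(3, 16, 3, 2), nn.ReLU(),
                                   nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                                   nn.Linear(16 * 16, d))      # 1st CNN (frames)
        self.cnn_a = nn.Sequential(nn.Conv1d(13, 16, 3, 2), nn.ReLU(),
                                   nn.AdaptiveAvgPool1d(4), nn.Flatten(),
                                   nn.Linear(16 * 4, d))       # 2nd CNN (MFCC)
        self.lstm_v = nn.LSTM(d, d, batch_first=True)          # 1st RNN module
        self.lstm_a = nn.LSTM(d, d, batch_first=True)          # 2nd RNN module
        self.emb = nn.Embedding(vocab, d)                      # token embedding
        self.lstm_t = nn.LSTM(d, d, batch_first=True)          # 3rd RNN (script)

    def forward(self, frames, audio, tokens):
        # frames: (B, T, 3, H, W); audio: (B, T, 13, L); tokens: (B, T_tok)
        B, T = frames.shape[:2]
        fv = self.cnn_v(frames.flatten(0, 1)).view(B, T, -1)   # per-frame feats
        fa = self.cnn_a(audio.flatten(0, 1)).view(B, T, -1)    # per-segment feats
        _, (hv, _) = self.lstm_v(fv)                 # first feature information
        _, (ha, _) = self.lstm_a(fa)                 # second feature information
        _, (ht, _) = self.lstm_t(self.emb(tokens))   # third feature information
        # simplest combination: plain concatenation of the three vectors
        return torch.cat([hv[-1], ha[-1], ht[-1]], dim=-1)
```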
FIG. 20 schematically illustrates the internal configuration of a feature inference model according to an embodiment of the present invention.
As shown in FIG. 20, the feature inference model applies weights learned by a plurality of fully connected layers to the feature information derived from the feature extraction model, derives an intermediate result (a representative vector), and thereby derives a result value for the answer video performed by the second evaluatee.
For example, the machine learning model described above may analyze an answer video performed by an evaluatee and derive information about the degree to which the evaluatee possesses the specific competency corresponding to that answer video.
The number of fully connected layers is not limited to the number shown in FIG. 20, and the feature inference model may include one or more fully connected layers. Where the feature inference model consists of a single fully connected layer, the intermediate result may be omitted.
Meanwhile, in another embodiment of the present invention, the feature inference model may be implemented to handle classification according to preset criteria by using a Softmax activation function, or to derive a score by using a Sigmoid activation function or the like.
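A corresponding sketch of the feature inference model, again with assumed layer widths, is shown below; the classification and score heads reflect the Softmax and Sigmoid variants just described.

```python
import torch.nn as nn

class FeatureInference(nn.Module):
    def __init__(self, d_in=384, n_classes=5, classify=True):
        super().__init__()
        # learned fully connected layers over the extracted feature vector;
        # the second layer's output plays the role of the representative vector
        self.fc = nn.Sequential(nn.Linear(d_in, 128), nn.ReLU(),
                                nn.Linear(128, 64), nn.ReLU())
        self.head = (nn.Sequential(nn.Linear(64, n_classes), nn.Softmax(dim=-1))
                     if classify else
                     nn.Sequential(nn.Linear(64, 1), nn.Sigmoid()))  # score head

    def forward(self, features):
        return self.head(self.fc(features))
```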
FIG. 21 schematically illustrates the internal configuration of a computing device according to an embodiment of the present invention.
The server system 1000 shown in FIG. 1 described above may include the components of the computing device shown in FIG. 21.
As shown in FIG. 21, the computing device 11000 may include at least a processor 11100, a memory 11200, a peripheral interface 11300, an I/O subsystem 11400, a power circuit 11500, and a communication circuit 11600. Here, the computing device 11000 may correspond to the server system 1000 shown in FIG. 1 or to one or more servers included in the server system 1000.
The memory 11200 may include, for example, high-speed random access memory, a magnetic disk, SRAM, DRAM, ROM, flash memory, or non-volatile memory. The memory 11200 may contain software modules, instruction sets, or other various data required for the operation of the computing device 11000.
Access to the memory 11200 by other components such as the processor 11100 or the peripheral interface 11300 may be controlled by the processor 11100.
The peripheral interface 11300 may couple the input and/or output peripherals of the computing device 11000 to the processor 11100 and the memory 11200. The processor 11100 may execute software modules or instruction sets stored in the memory 11200 to perform various functions for the computing device 11000 and to process data.
The I/O subsystem may couple various input/output peripherals to the peripheral interface 11300. For example, the I/O subsystem may include a controller for coupling peripherals such as a monitor, keyboard, mouse, or printer, or, where needed, a touch screen or sensor, to the peripheral interface 11300. In another aspect, input/output peripherals may be coupled to the peripheral interface 11300 without passing through the I/O subsystem.
The power circuit 11500 may supply power to all or some of the components of the terminal. For example, the power circuit 11500 may include a power management system, one or more power sources such as a battery or alternating current (AC), a charging system, a power failure detection circuit, a power converter or inverter, a power status indicator, or any other components for power generation, management, and distribution.
The communication circuit 11600 may enable communication with other computing devices using at least one external port.
Alternatively, as described above, the communication circuit 11600 may, where necessary, include an RF circuit to transmit and receive RF signals, also known as electromagnetic signals, thereby enabling communication with other computing devices.
This embodiment of FIG. 21 is only an example of the computing device 11000; the computing device 11000 may omit some of the components shown in FIG. 21, further include additional components not shown in FIG. 21, or have a configuration or arrangement combining two or more components. For example, a computing device for a communication terminal in a mobile environment may further include a touch screen, sensors, and the like in addition to the components shown in FIG. 21, and the communication circuit 11600 may include circuitry for RF communication in various communication schemes (WiFi, 3G, LTE, Bluetooth, NFC, Zigbee, and the like). The components that can be included in the computing device 11000 may be implemented as hardware, software, or a combination of both, including integrated circuits specialized for one or more signal-processing tasks or applications.
The methods according to embodiments of the present invention may be implemented in the form of program instructions executable by various computing devices and recorded on computer-readable media. In particular, the program according to the present embodiment may be configured as a PC-based program or as an application dedicated to a mobile terminal. The application to which the present invention applies may be installed on a user terminal or a merchant terminal through a file provided by a file distribution system. As one example, the file distribution system may include a file transmission unit (not shown) that transmits the file in response to a request from the user terminal or the merchant terminal.
The devices described above may be implemented as hardware components, software components, and/or combinations of hardware and software components. For example, the devices and components described in the embodiments may be implemented using one or more general-purpose or special-purpose computers, such as a processor, a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a programmable logic unit (PLU), a microprocessor, or any other device capable of executing and responding to instructions. A processing device may run an operating system (OS) and one or more software applications executed on that operating system. The processing device may also access, store, manipulate, process, and generate data in response to the execution of software. For convenience of understanding, a single processing device is sometimes described as being used, but a person of ordinary skill in the art will recognize that a processing device may include a plurality of processing elements and/or plural types of processing elements. For example, a processing device may include a plurality of processors, or one processor and one controller. Other processing configurations, such as parallel processors, are also possible.
Software may include a computer program, code, instructions, or a combination of one or more of these, and may configure a processing device to operate as desired or may command the processing device independently or collectively. Software and/or data may be embodied, permanently or temporarily, in any type of machine, component, physical device, virtual equipment, computer storage medium or device, or transmitted signal wave, in order to be interpreted by the processing device or to provide instructions or data to the processing device. The software may be distributed over networked computing devices and stored or executed in a distributed manner. Software and data may be stored on one or more computer-readable recording media.
The methods according to the embodiments may be implemented in the form of program instructions executable by various computer means and recorded on computer-readable media. The computer-readable media may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the media may be those specially designed and configured for the embodiments or those known to and usable by those skilled in computer software. Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory. Examples of program instructions include not only machine code such as that produced by a compiler but also high-level language code executable by a computer using an interpreter or the like. The hardware devices described above may be configured to operate as one or more software modules to perform the operations of the embodiments, and vice versa.
본 발명의 일 실시예에 따르면, 특정 역량에 대한 평가를 수행하기 위한 기계학습모델을 통해 피평가자의 답변영상에 기초하여 평가결과를 도출하므로, 평가에 소요되는 시간 및 비용을 절감함과 동시에 객관적인 평가결과를 도출할 수 있는 효과를 발휘할 수 있다.According to an embodiment of the present invention, since the evaluation result is derived based on the image of the respondent's answer through the machine learning model for performing the evaluation of a specific competency, the time and cost required for the evaluation are reduced and the objective evaluation is at the same time It can have an effect that can lead to results.
According to an embodiment of the present invention, the evaluation interface provided to the evaluator in the evaluation-interface-providing step includes a script layer in which a script (transcript) of the assessee's answer video is displayed, so that the evaluator can easily follow the assessee's answer.
According to an embodiment of the present invention, when the evaluator selects a specific region of the script in the script layer, a behavior-indicator-list area for the corresponding question or specific competency is displayed, so that the evaluator can easily select the behavior indicator corresponding to the selected region of the script.
According to an embodiment of the present invention, the evaluation interface includes a behavior indicator layer that displays the region of the script selected by the evaluator in the script layer together with the specific behavior indicators selected from the behavior-indicator-list area, so that the evaluator can easily grasp the assessee's answer for each behavior indicator.
According to an embodiment of the present invention, the evaluation interface includes an in-depth question layer through which the evaluator enters in-depth questions prompted by the assessee's answer video, and a remarks layer through which the evaluator enters notable points about the answer video, so that an evaluator being trained in the evaluation method can compare his or her entries with the in-depth questions and remarks written by an expert in that method.
According to an embodiment of the present invention, image information and audio information are separated from the assessee's answer video, and each is input into the machine learning model to derive the evaluation result, so that the context and intent of the answer can be grasped in detail and an accurate evaluation result derived.
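A minimal sketch of this separation step is given below; OpenCV, the ffmpeg CLI, librosa, and every function name here are assumptions chosen for illustration rather than tooling disclosed by the embodiment.

```python
# Illustrative sketch only: the embodiment does not prescribe tooling; OpenCV,
# the ffmpeg CLI, and librosa are assumed to be available.
import subprocess

import cv2
import librosa

def split_answer_video(path: str):
    """Separate an answer video into image frames and an audio waveform."""
    # Image information: grab roughly one frame per second.
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    step = max(int(fps), 1)
    frames, i = [], 0
    ok, frame = cap.read()
    while ok:
        if i % step == 0:
            frames.append(frame)  # BGR image array for the vision model
        ok, frame = cap.read()
        i += 1
    cap.release()
    # Audio information: extract the track with ffmpeg, load at 16 kHz.
    subprocess.run(["ffmpeg", "-y", "-i", path, "-vn", "answer_audio.wav"],
                   check=True)
    waveform, sr = librosa.load("answer_audio.wav", sr=16000)
    return frames, waveform, sr
```

Each modality would then be preprocessed and fed to the machine learning model separately, as the embodiment describes.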
According to an embodiment of the present invention, the second assessee competency information derived in the competency-information derivation step through the machine learning model includes discovery probability information for each behavior indicator, so that the evaluation result can be presented objectively.
According to an embodiment of the present invention, the second assessee competency information derived in the competency-information derivation step through the machine learning model further includes the text from the assessee's answer video that corresponds to the discovery probability information for each behavior indicator, so that the assessee's answer for each behavior indicator can be presented concretely.
According to an embodiment of the present invention, in-depth questions are set based on the derived behavior indicators included in the first output information from the first-output-information derivation step and on the plurality of behavior indicators for the specific competency, so that even without an evaluator the assessee can be given in-depth questions that elicit answers for behavior indicators not yet observed.
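As a sketch of how such questions might be selected, the following assumes a preset mapping from behavior indicators to candidate in-depth questions and a discovery-probability threshold; the indicator names, threshold value, and question texts are all invented for illustration.

```python
# Illustrative sketch: indicator names, threshold, and the question map are
# assumptions for this example, not values disclosed in the embodiment.
from typing import Dict, List

# Preset association of each behavior indicator with candidate in-depth questions.
QUESTION_BANK: Dict[str, List[str]] = {
    "delegates_tasks": ["Tell me about a time you delegated a critical task."],
    "gives_feedback": ["Describe how you delivered difficult feedback."],
    "resolves_conflict": ["Walk me through a conflict you resolved."],
}

def set_in_depth_questions(discovery_prob: Dict[str, float],
                           threshold: float = 0.5) -> List[str]:
    """Pick questions for indicators not derived in the general question step."""
    questions: List[str] = []
    for indicator, candidates in QUESTION_BANK.items():
        # Treat an indicator as "not derived" when its discovery probability
        # stays below the preset criterion.
        if discovery_prob.get(indicator, 0.0) < threshold:
            questions.extend(candidates)
    return questions

# Example: only "delegates_tasks" was observed, so the other two are probed.
print(set_in_depth_questions({"delegates_tasks": 0.82, "gives_feedback": 0.31}))
```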
According to an embodiment of the present invention, in the competency evaluation step, comprehensive evaluation information is derived based on the first output information and on second output information obtained by additionally analyzing the answer video the assessee gave to the in-depth questions, so that a more accurate evaluation result can be derived.
According to an embodiment of the present invention, the comprehensive evaluation information derived in the comprehensive-evaluation-information derivation step includes a score for the specific competency calculated by aggregating the discovery probability information in the first output information and the second output information, so that the assessee's evaluation result can be grasped at a glance.
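One possible aggregation, shown purely as an assumption since the embodiment does not fix the formula, is to take each indicator's strongest discovery probability across the two rounds and average:

```python
# Illustrative aggregation only: the embodiment leaves the scoring formula open.
from typing import Dict

def competency_score(first: Dict[str, float],
                     second: Dict[str, float]) -> float:
    """Combine both rounds' discovery probabilities into one 0-100 score."""
    indicators = set(first) | set(second)
    if not indicators:
        return 0.0
    # Keep the stronger evidence for each indicator, then average.
    best = [max(first.get(i, 0.0), second.get(i, 0.0)) for i in indicators]
    return 100.0 * sum(best) / len(best)

score = competency_score({"delegates_tasks": 0.82, "gives_feedback": 0.31},
                         {"gives_feedback": 0.77})
print(f"Competency score: {score:.1f}")
```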
An embodiment of the present invention may also be implemented in the form of a recording medium including computer-executable instructions, such as program modules executed by a computer. Computer-readable media may be any available media accessible by a computer and include both volatile and nonvolatile media, and removable and non-removable media. Computer-readable media may also include both computer storage media and communication media. Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer-readable instructions, data structures, program modules, or other data. Communication media typically include computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave, or other transport mechanism, and include any information delivery media.
Although the method and system of the present invention have been described with reference to specific embodiments, some or all of their components or operations may be implemented using a computer system having a general-purpose hardware architecture.
The foregoing description of the present invention is illustrative, and those of ordinary skill in the art to which the present invention pertains will understand that it can easily be modified into other specific forms without changing its technical spirit or essential features. The embodiments described above should therefore be understood as illustrative in all respects and not restrictive. For example, each component described as a single unit may be implemented in distributed form, and components described as distributed may likewise be implemented in combined form.
The scope of the present invention is indicated by the claims that follow rather than by the detailed description above, and all changes or modifications derived from the meaning and scope of the claims and their equivalents should be construed as falling within the scope of the present invention.

Claims (13)

  1. An automated method for evaluating an assessee based on behavior indicators, performed in a server system,
    wherein a plurality of behavior indicators and a plurality of questions are preset in the server system for a specific competency, and each of the plurality of behavior indicators is associated with one or more of the plurality of questions,
    the automated evaluation method comprising:
    a general question step including a first question-providing step of providing the assessee with one or more of the preset questions for evaluating the specific competency, and a first-output-information derivation step of inputting an answer video the assessee gave to the one or more questions provided in the first question-providing step into a machine learning model and deriving first output information including evaluation information on the assessee's specific competency and derived behavior indicators related to the evaluation information;
    an in-depth question setting step of setting, after the general question step has been performed one or more times, one or more in-depth questions based on the one or more derived behavior indicators; and
    a competency evaluation step of evaluating the specific competency based on an answer video the assessee gave to the in-depth questions and on the first output information derived in the first-output-information derivation step.
  2. The method according to claim 1,
    wherein the competency evaluation step comprises:
    an in-depth question step including a second question-providing step of providing the assessee with one or more of the in-depth questions set in the in-depth question setting step, and a second-output-information derivation step of inputting an answer video the assessee gave to the one or more in-depth questions provided in the second question-providing step into the machine learning model and deriving second output information including evaluation information on the assessee's specific competency and derived behavior indicators related to the evaluation information; and
    a comprehensive-evaluation-information derivation step of deriving comprehensive evaluation information on the assessee's specific competency based on the first output information and the second output information.
  3. The method according to claim 1,
    wherein the in-depth question setting step
    determines, based on the plurality of behavior indicators set for the specific competency and the one or more derived behavior indicators derived through the general question step, which of the plurality of behavior indicators were not derived as derived behavior indicators, and sets one or more in-depth questions to elicit from the assessee answers related to the behavior indicators that were not derived.
  4. The method according to claim 1,
    wherein the in-depth question setting step
    determines, based on the plurality of behavior indicators set for the specific competency and the one or more derived behavior indicators derived through the general question step, any behavior indicator that was derived as a derived behavior indicator but fails to satisfy a preset criterion to be an incomplete behavior indicator, and sets one or more in-depth questions to elicit from the assessee answers related to the incomplete behavior indicator.
  5. The method according to claim 3,
    wherein the in-depth question setting step
    inputs the first output information derived in the first-output-information derivation step into a machine-learning-based in-depth question recommendation model to derive one or more in-depth questions for eliciting from the assessee answers related to the behavior indicators that were not derived.
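As an illustrative aside (not part of the claims), a toy version of such a machine-learning-based in-depth question recommendation model could rank a question bank against the first output information; TF-IDF cosine similarity stands in here for whatever model the claim leaves unspecified, and the question texts are invented.

```python
# Illustrative sketch: the claim fixes no architecture, so TF-IDF cosine
# similarity stands in for the machine-learning-based recommendation model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

QUESTION_BANK = [
    "Tell me about a time you delegated a critical task.",
    "Describe how you delivered difficult feedback.",
    "Walk me through a conflict you resolved.",
]

def recommend_questions(first_output_text: str, top_k: int = 2):
    """Rank bank questions by similarity to the first output information."""
    vectorizer = TfidfVectorizer().fit(QUESTION_BANK + [first_output_text])
    bank_vecs = vectorizer.transform(QUESTION_BANK)
    context_vec = vectorizer.transform([first_output_text])
    scores = cosine_similarity(bank_vecs, context_vec).ravel()
    ranked = sorted(zip(scores, QUESTION_BANK), reverse=True)
    return [question for _, question in ranked[:top_k]]

print(recommend_questions("candidate described delegating work to a teammate"))
```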
  6. The method according to claim 2,
    wherein the first-output-information derivation step and the second-output-information derivation step
    separate image information and audio information from the answer video the assessee gave, preprocess each of the separated image information and audio information, and input them into the machine learning model.
  7. The method according to claim 2,
    wherein the first-output-information derivation step and the second-output-information derivation step comprise:
    deriving text information based on the answer video the assessee gave;
    performing embedding that expresses the derived text information as vectors; and
    inputting the embedded vectors into the machine learning model.
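As an illustrative aside (not part of the claims), the text-derivation and embedding steps of claim 7 could look as follows; the sentence-transformers package and model name are assumptions, since the claim names no specific tooling.

```python
# Illustrative sketch: the claim names no speech-to-text engine or embedding
# model; the sentence-transformers package is assumed here for concreteness.
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder choice

def embed_answer(transcript: str):
    """Express text information derived from the answer video as a vector."""
    # One fixed-size embedding for the transcribed answer, ready to be fed
    # into the machine learning model of claim 7.
    return encoder.encode([transcript])[0]

vector = embed_answer("In my last role I led a team of five engineers...")
print(vector.shape)  # (384,) for this particular encoder
```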
  8. The method according to claim 2,
    wherein the first output information derived in the first-output-information derivation step and the second output information derived in the second-output-information derivation step
    further include discovery probability information for the derived behavior indicators related to the evaluation information, and text information from the answer video the assessee gave that corresponds to the discovery probability information.
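As an illustrative aside (not part of the claims), the output information of claims 1 and 8 could be carried in a structure like the following; all field names are invented for illustration.

```python
# Illustrative data structure only: field names are assumptions beyond what
# the claims themselves recite.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class OutputInformation:
    evaluation: Dict[str, float]      # evaluation information per competency
    derived_indicators: List[str]     # behavior indicators that were observed
    discovery_prob: Dict[str, float] = field(default_factory=dict)
    evidence_text: Dict[str, str] = field(default_factory=dict)  # text per indicator

first_output = OutputInformation(
    evaluation={"leadership": 0.74},
    derived_indicators=["delegates_tasks"],
    discovery_prob={"delegates_tasks": 0.82},
    evidence_text={"delegates_tasks": "I split the launch work across the team..."},
)
```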
  9. The method according to claim 8,
    wherein the comprehensive evaluation information derived in the comprehensive-evaluation-information derivation step
    includes a score for the specific competency calculated by aggregating the discovery probability information for each derived behavior indicator derived in the first-output-information derivation step and the second-output-information derivation step.
  10. The method according to claim 8,
    wherein the comprehensive evaluation information derived in the comprehensive-evaluation-information derivation step
    includes a score for the specific competency derived based on one or more of: the discovery probability information for the derived behavior indicators included in the first output information and the second output information; the text information; basic score information for the corresponding answer videos; and feature information generated by the machine learning model in deriving the first output information and the second output information.
  11. The method according to claim 8,
    wherein the comprehensive evaluation information derived in the comprehensive-evaluation-information derivation step
    includes a score for the specific competency calculated by aggregating the result information from preprocessing each of the answer videos input in the first-output-information derivation step and the second-output-information derivation step.
  12. A server system for performing an automated method for evaluating an assessee based on behavior indicators,
    wherein a plurality of behavior indicators and a plurality of questions are preset in the server system for a specific competency, and each of the plurality of behavior indicators is associated with one or more of the plurality of questions,
    the server system comprising: a general question unit including a first question-providing unit that provides the assessee with one or more of the preset questions for evaluating the specific competency, and a first-output-information derivation unit that inputs an answer video the assessee gave to the one or more questions provided by the first question-providing unit into a machine learning model and derives first output information including evaluation information on the assessee's specific competency and derived behavior indicators related to the evaluation information;
    an in-depth question setting unit that, after the general question unit has operated one or more times, sets one or more in-depth questions based on the one or more derived behavior indicators; and
    a competency evaluation unit that evaluates the specific competency based on an answer video the assessee gave to the in-depth questions and on the first output information derived by the first-output-information derivation unit.
  13. A computer-readable medium for implementing an automated method for evaluating an assessee based on behavior indicators, performed in a computing device having one or more processors and one or more memories,
    wherein a plurality of behavior indicators and a plurality of questions are preset in the computing device for a specific competency, and each of the plurality of behavior indicators is associated with one or more of the plurality of questions,
    the automated evaluation method comprising:
    a general question step including a first question-providing step of providing the assessee with one or more of the preset questions for evaluating the specific competency, and a first-output-information derivation step of inputting an answer video the assessee gave to the one or more questions provided in the first question-providing step into a machine learning model and deriving first output information including evaluation information on the assessee's specific competency and derived behavior indicators related to the evaluation information;
    an in-depth question setting step of setting, after the general question step has been performed one or more times, one or more in-depth questions based on the one or more derived behavior indicators; and
    a competency evaluation step of evaluating the specific competency based on an answer video the assessee gave to the in-depth questions and on the first output information derived in the first-output-information derivation step.
PCT/KR2021/008644 2020-07-10 2021-07-07 Method, system, and computer-readable medium for deriving in-depth questions for automated evaluation of interview video by using machine learning model WO2022010255A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020200085062A KR102475524B1 (en) 2020-07-10 2020-07-10 Methods, Systems and Computer-Readable Medium for Deriving In-Depth Questions for Automated Evaluation of Interview Videos using Machine Learning Model
KR10-2020-0085062 2020-07-10

Publications (1)

Publication Number Publication Date
WO2022010255A1 true WO2022010255A1 (en) 2022-01-13

Family

ID=79553328

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2021/008644 WO2022010255A1 (en) 2020-07-10 2021-07-07 Method, system, and computer-readable medium for deriving in-depth questions for automated evaluation of interview video by using machine learning model

Country Status (2)

Country Link
KR (1) KR102475524B1 (en)
WO (1) WO2022010255A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102630803B1 (en) * 2022-01-24 2024-01-29 주식회사 허니엠앤비 Emotion analysis result providing device and emotion analysis result providing system
KR102449661B1 (en) * 2022-06-27 2022-10-04 주식회사 레몬베이스 Method, apparatus and system of providing recruiting service based on artificial intelligence
CN117557426B (en) * 2023-12-08 2024-05-07 广州市小马知学技术有限公司 Work data feedback method and learning evaluation system based on intelligent question bank

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004309631A (en) * 2003-04-03 2004-11-04 Nippon Telegr & Teleph Corp <Ntt> Apparatus, method, and program for assisting interaction practice
KR20130055833A (en) * 2011-11-21 2013-05-29 배창수 Job interview brokerage system using terminal
JP2017219989A (en) * 2016-06-07 2017-12-14 株式会社採用と育成研究社 Online interview evaluation device, method and program
KR20190118140A (en) * 2018-04-09 2019-10-17 주식회사 마이다스아이티 Interview automation system using online talent analysis
KR20190140805A (en) * 2018-05-29 2019-12-20 주식회사 제네시스랩 Non-verbal Evaluation Method, System and Computer-readable Medium Based on Machine Learning

Also Published As

Publication number Publication date
KR20220007193A (en) 2022-01-18
KR102475524B1 (en) 2022-12-08

Similar Documents

Publication Publication Date Title
WO2022010255A1 (en) Method, system, and computer-readable medium for deriving in-depth questions for automated evaluation of interview video by using machine learning model
WO2020190112A1 (en) Method, apparatus, device and medium for generating captioning information of multimedia data
WO2020138624A1 (en) Apparatus for noise canceling and method for the same
WO2020197241A1 (en) Device and method for compressing machine learning model
WO2020213750A1 (en) Artificial intelligence device for recognizing object, and method therefor
WO2020145571A2 (en) Method and system for managing automatic evaluation model for interview video, and computer-readable medium
WO2018143707A1 (en) Makeup evaluation system and operation method thereof
WO2019225961A1 (en) Electronic device for outputting response to speech input by using application and operation method thereof
WO2022065811A1 (en) Multimodal translation method, apparatus, electronic device and computer-readable storage medium
WO2014021567A1 (en) Method for providing message service, and device and system therefor
WO2019135621A1 (en) Video playback device and control method thereof
WO2021006404A1 (en) Artificial intelligence server
WO2020036297A1 (en) Electronic apparatus and controlling method thereof
WO2020213758A1 (en) Speech-interactive artificial intelligence device and method therefor
WO2020017827A1 (en) Electronic device and control method for electronic device
WO2021215804A1 (en) Device and method for providing interactive audience simulation
WO2022010240A1 (en) Scalp and hair management system
WO2020117006A1 (en) Ai-based face recognition system
WO2022154457A1 (en) Action localization method, device, electronic equipment, and computer-readable storage medium
WO2019203421A1 (en) Display device and display device control method
WO2020209693A1 (en) Electronic device for updating artificial intelligence model, server, and operation method therefor
WO2020184753A1 (en) Artificial intelligence apparatus for performing voice control by using voice extraction filter, and method therefor
WO2023080276A1 (en) Query-based database linkage distributed deep learning system, and method therefor
WO2022265127A1 (en) Artificial intelligence learning-based user churn rate prediction and user knowledge tracing system, and operation method thereof
WO2020251073A1 (en) Massage device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21837798

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21837798

Country of ref document: EP

Kind code of ref document: A1