WO2024024116A1 - Employee resignation prediction device, employee resignation prediction learning device, method, and program - Google Patents

Employee resignation prediction device, employee resignation prediction learning device, method, and program

Info

Publication number
WO2024024116A1
Authority
WO
WIPO (PCT)
Prior art keywords
prediction
retirement
feature
incumbent
personnel
Prior art date
Application number
PCT/JP2022/029396
Other languages
French (fr)
Japanese (ja)
Inventor
哲哉 塩田
方邦 石井
央 倉沢
奏 山本
Original Assignee
日本電信電話株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日本電信電話株式会社
Priority to PCT/JP2022/029396
Publication of WO2024024116A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/10Office automation; Time management

Definitions

  • the disclosed technology relates to a retirement prediction device, a retirement prediction learning device, a retirement prediction method, a retirement prediction learning method, and a program.
  • the disclosed technology was developed in view of the above points, and aims to predict retirement to support taking measures to prevent retirement at an appropriate time.
  • a first aspect of the present disclosure is a retirement prediction device including: a generation unit that generates a feature vector including a plurality of feature quantities from personnel-related information for a predetermined period of each of current employees and retirees; and a prediction unit that predicts the retirement probabilities of an incumbent to be predicted at a plurality of future points in time by using a prediction model and the feature vector of the incumbent to be predicted, the prediction model being trained with the feature vectors generated for each of a plurality of retirees as positive examples and the feature vectors generated for each of a plurality of incumbents as negative examples, and predicting, when a feature vector for an incumbent is input, a retirement probability indicating the possibility that the incumbent will retire at each of the plurality of future points in time.
  • a second aspect of the present disclosure is a retirement prediction learning device including: a generation unit that generates a feature vector including a plurality of feature quantities from personnel-related information for a predetermined period of each of current employees and retirees; and a learning unit that learns, with the feature vectors generated for each of a plurality of retirees as positive examples and the feature vectors generated for each of a plurality of incumbents as negative examples, a prediction model that, when a feature vector for an incumbent is input, predicts a retirement probability indicating the possibility that the incumbent will retire at a plurality of future points in time.
  • a third aspect of the present disclosure is a retirement prediction method in which a generation unit generates a feature vector including a plurality of feature quantities from personnel-related information for a predetermined period of each of current employees and retirees, and a prediction unit predicts the retirement probabilities of an incumbent to be predicted at a plurality of future points in time by using a prediction model and the feature vector of the incumbent to be predicted, the prediction model being trained with the feature vectors generated for each of a plurality of retirees as positive examples and the feature vectors generated for each of a plurality of incumbents as negative examples, and predicting, when a feature vector for an incumbent is input, a retirement probability indicating the possibility that the incumbent will retire at each of the plurality of future points in time.
  • a fourth aspect of the present disclosure is a retirement prediction learning method in which a generation unit generates a feature vector including a plurality of feature quantities from personnel-related information for a predetermined period of each of current employees and retirees, and a learning unit learns, with the feature vectors generated for each of a plurality of retirees as positive examples and the feature vectors generated for each of a plurality of incumbents as negative examples, a prediction model that, when a feature vector for an incumbent is input, predicts a retirement probability indicating the possibility that the incumbent will retire at a plurality of future points in time.
  • a fifth aspect of the present disclosure is a program that causes a computer to function as each component of the retirement prediction device or retirement prediction learning device described above.
  • FIG. 1 is a block diagram showing the hardware configuration of a retirement prediction processing device.
  • FIG. 2 is a block diagram showing an example of a functional configuration of a retirement prediction processing device.
  • FIG. 3 is a diagram for explaining personnel-related information.
  • FIG. 4 is a diagram for explaining the period of personnel-related information used for learning.
  • FIG. 5 is a diagram for explaining an example of generation of feature quantities.
  • FIG. 6 is a diagram for explaining the period of personnel-related information used for static items and statistical items.
  • FIG. 7 is a diagram showing an example of statistical processing of attendance data.
  • FIG. 8 is a diagram for explaining time-series prediction of retirement probability.
  • FIG. 9 is a diagram showing an example of a time-series prediction result of retirement probability.
  • FIG. 10 is a diagram showing an example of the degree of contribution for each feature quantity.
  • FIG. 11 is a diagram showing an example of a correspondence table.
  • FIG. 12 is a diagram for explaining the degree of similarity in contribution of feature quantities between an incumbent and a retiree.
  • FIG. 13 is a diagram showing an example of a list of calculation results of contribution degrees of feature quantities.
  • FIG. 14 is a diagram showing an example of a prediction result list.
  • FIG. 15 is a flowchart showing the flow of learning processing.
  • FIG. 16 is a flowchart showing the flow of prediction processing.
  • FIG. 17 is a diagram schematically showing learning data, prediction data, and prediction results.
  • FIG. 18 is a diagram showing an example of a verification result of the prediction accuracy of the prediction model in this embodiment.
  • FIG. 19 is a diagram for explaining the degree of similarity of feature quantities between an incumbent and a retiree.
  • FIG. 1 is a block diagram showing the hardware configuration of a retirement prediction processing device 10 according to the present embodiment.
  • the retirement prediction processing device 10 includes a CPU (Central Processing Unit) 11, a ROM (Read Only Memory) 12, a RAM (Random Access Memory) 13, a storage 14, an input unit 15, a display unit 16, and a communication I/F (interface) 17.
  • Each configuration is communicably connected to each other via a bus 19.
  • the CPU 11 is a central processing unit that executes various programs and controls various parts. That is, the CPU 11 reads a program from the ROM 12 or the storage 14 and executes the program using the RAM 13 as a work area. The CPU 11 controls each of the above-mentioned components and performs various calculation processes according to programs stored in the ROM 12 or the storage 14. In this embodiment, the ROM 12 or the storage 14 stores a retirement prediction learning program for executing a learning process described later and a retirement prediction program for executing a prediction process described later.
  • the ROM 12 stores various programs and various data.
  • the RAM 13 temporarily stores programs or data as a work area.
  • the storage 14 is constituted by a storage device such as an HDD (Hard Disk Drive) or an SSD (Solid State Drive), and stores various programs including an operating system and various data.
  • the input unit 15 includes a pointing device such as a mouse and a keyboard, and is used to perform various inputs.
  • the display unit 16 is, for example, a liquid crystal display, and displays various information.
  • the display section 16 may employ a touch panel system and function as the input section 15.
  • the communication I/F 17 is an interface for communicating with other devices.
  • for this communication, for example, a wired communication standard such as Ethernet (registered trademark) or FDDI, or a wireless communication standard such as 4G, 5G, or Wi-Fi (registered trademark) is used.
  • FIG. 2 is a block diagram showing an example of the functional configuration of the retirement prediction processing device 10.
  • the retirement prediction processing device 10 includes a retirement prediction learning section 20 and a retirement prediction section 40 as functional configurations.
  • the retirement prediction learning section 20 further includes a filter section 21, a generation section 22, and a learning section 23.
  • the retirement prediction unit 40 further includes a generation unit 41, a prediction unit 42, and an interpretation unit 43.
  • the retirement prediction learning section 20 is an example of a retirement prediction learning device of the present invention, and the retirement prediction section 40 is an example of a retirement prediction device of the present invention.
  • Each functional configuration is realized by the CPU 11 reading out a retirement prediction learning program and a retirement prediction program stored in the ROM 12 or the storage 14, loading them onto the RAM 13, and executing them.
  • the retirement prediction learning unit 20 is a functional configuration that functions when learning a prediction model 31, which will be described later.
  • the filter unit 21 acquires personnel-related information of the incumbent from the incumbent DB 51 and acquires personnel-related information of the retiree from the retiree DB 52.
  • the personnel-related information includes information on data categories such as basic information, affiliation, salary, attendance, goal setting, and evaluation.
  • the information such as basic information, affiliation, salary, attendance, goal setting, and evaluation further includes information on each item as shown in FIG. 3, for example.
  • the example in FIG. 3 also shows "statistical processing presence/absence" indicating whether statistical processing is to be performed when generating feature amounts from the information of the items of each data category.
  • in each of the current employee DB 51 and the retiree DB 52, personnel-related information from a predetermined start point (for example, the time of joining the company) to the present time or to the time of retirement is stored for every predetermined period (for example, one month), in association with a personal code that is the identification information of each current employee or retiree.
  • the retiree DB 52 also stores, for each retiree, the date of retirement and the reason for retirement obtained through an interview or the like at the time of retirement.
  • the retiree DB 52 also stores, for each retiree, a feature vector generated from personnel-related information by a generation unit 22, which will be described later.
  • the filter unit 21 extracts, from the acquired personnel-related information covering the entire period for each of the retirees and current employees, the personnel-related information for the period used to generate the feature vectors for learning the prediction model 31. Specifically, for a retiree, the filter unit 21 extracts personnel-related information prior to a point that is a first period before the time of retirement. For a current employee, the filter unit 21 extracts personnel-related information prior to a point that is the first period plus a second period before the present time, and a plurality of periods are set as this second period.
  • that is, when the first period is N months and the second period is X months, for a current employee not only the most recent N months' worth but also a further X months' worth of personnel-related information is not used.
  • a plurality of arbitrary values greater than or equal to 0 are prepared for X and allocated to the respective incumbents.
  • the allocation of X to incumbents may be random, or may be correlated with years of service, for example. In the latter case, X is set to a small value for an incumbent with a short period of service and to a large value for an incumbent with a long period of service, so that the period of N+X months does not exceed the period from the month of joining to the latest month and as much personnel-related information as possible can be utilized without loss.
  • the filter unit 21 sets the first periods to N months, N+1 months, . . . , N+L months (L is an arbitrary integer) and extracts personnel-related information according to each first period.
  • the filter unit 21 passes the personnel-related information for the period extracted for each of retirees and current employees to the generation unit 22.
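  • as a minimal sketch of this filtering step (an illustration only, not the patented implementation): assuming each employee's personnel-related information is held as one row per month in a pandas DataFrame with a monthly PeriodIndex, the learning window could be cut out as follows; the function and variable names are hypothetical.

    import pandas as pd

    def extract_training_window(monthly_df: pd.DataFrame, months_to_drop: int) -> pd.DataFrame:
        """Return the rows older than `months_to_drop` months before the last recorded month."""
        last_month = monthly_df.index.max()           # month of retirement (retiree) or latest month (incumbent)
        cutoff = last_month - months_to_drop          # a monthly PeriodIndex supports integer offsets
        return monthly_df[monthly_df.index <= cutoff]

    # retiree: drop the most recent N months before the month of retirement
    # retiree_window = extract_training_window(retiree_df, months_to_drop=N)
    # incumbent: additionally drop X months, so the window ends N+X months before the latest month
    # incumbent_window = extract_training_window(incumbent_df, months_to_drop=N + X)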
  • the generation unit 22 generates a feature vector including a plurality of feature quantities from the personnel-related information passed from the filter unit 21. For items of the personnel-related information whose values do not change during the period extracted by the filter unit 21 or whose values change irregularly (hereinafter referred to as "static items"), the generation unit 22 generates feature quantities by using the item values as they are or by converting the item values into categorical variables. For items whose values are numerical and change regularly (hereinafter referred to as "statistical items"), the generation unit 22 generates feature quantities by statistically processing the values for the most recent predetermined period.
  • specifically, the generation unit 22 generates the feature quantities by sequentially applying processing to each item included in the personnel-related information according to the data category to which the item belongs and the type of its value (numerical value, categorical variable, text, etc.).
  • the generation unit 22 first removes abnormal values from the values of each item based on a predetermined rule.
  • the generation unit 22 executes a name matching process, for example, when a plurality of pieces of personnel-related information exist for the same person.
  • the generation unit 22 performs statistical processing (details will be described later) for numerical statistical items such as salary, attendance, goals, and evaluation.
  • the generation unit 22 standardizes items whose values are numerical values by converting them so that the values are in the range of 0 to 1.
  • for items whose values are numerical, for example, people in the same employment category may be grouped together, and group standardization processing may be executed to standardize the values on a group-by-group basis.
  • the generation unit 22 performs one-hot encoding on the items whose values are categorical variables.
  • the generation unit 22 separates the text items into words, and weights each word using an index (for example, TF-IDF) regarding the appearance rate of the word in all personnel-related information.
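  • the following is a minimal sketch of the item-wise preprocessing described above (group standardization, one-hot encoding, and TF-IDF weighting), assuming the extracted personnel-related information is a pandas DataFrame; the column names used here are hypothetical, and in practice Japanese free text would first be split into words (for example with a morphological analyzer) before TF-IDF weighting.

    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer

    def preprocess_items(df: pd.DataFrame) -> pd.DataFrame:
        out = pd.DataFrame(index=df.index)

        # group standardization: scale a numeric item into [0, 1] within each employment-category group
        grouped = df.groupby("employment_category")["monthly_salary"]
        out["monthly_salary_scaled"] = grouped.transform(
            lambda s: (s - s.min()) / (s.max() - s.min()) if s.max() > s.min() else 0.0
        )

        # one-hot encoding for a categorical item
        out = out.join(pd.get_dummies(df["department"], prefix="department"))

        # TF-IDF weighting for a free-text item (words assumed to be already space-separated)
        tfidf = TfidfVectorizer()
        text_matrix = tfidf.fit_transform(df["goal_text"].fillna(""))
        text_df = pd.DataFrame(
            text_matrix.toarray(), index=df.index,
            columns=[f"goal_tfidf_{w}" for w in tfidf.get_feature_names_out()],
        )
        return out.join(text_df)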
  • for the statistical processing, the most recent M months of the period extracted by the filter unit 21 are used. That is, for a retiree, the M months counted back from the point N months before the month of retirement are used, and for a current employee, the M months counted back from the point N+X months before the latest month are used. Note that items such as goals and evaluations are often updated only every fiscal year or every half year, for example, so the period of personnel-related information to be used is not limited to M months, and the entire period may be used.
  • FIG. 7 shows an example of statistical processing of the item "total working hours" of attendance data.
  • the generation unit 22 performs statistical processing to obtain the minimum value, maximum value, total, average, median, standard deviation, etc. of the monthly total working hours over the M months, and uses these as feature quantities.
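  • for illustration, a statistical item such as the monthly total working hours could be summarized as follows (a sketch assuming the monthly values are held in a pandas Series in chronological order; the names are hypothetical):

    import pandas as pd

    def attendance_features(total_working_hours: pd.Series, m_months: int) -> dict:
        """Summarize the most recent m_months of monthly total working hours into feature quantities."""
        recent = total_working_hours.tail(m_months)
        return {
            "working_hours_min": recent.min(),
            "working_hours_max": recent.max(),
            "working_hours_sum": recent.sum(),
            "working_hours_mean": recent.mean(),
            "working_hours_median": recent.median(),
            "working_hours_std": recent.std(),
        }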
  • the generation unit 22 generates a feature vector whose elements are the plurality of feature amounts generated as described above for each of the incumbent and retired person. Furthermore, the generation unit 22 generates a feature vector for each piece of personnel-related information extracted by setting the first period to N months, N+1 months, . . . , N+L months (L is an arbitrary integer). The generation unit 22 passes the generated feature vector to the learning unit 23. Further, the generation unit 22 stores the retiree's feature vector in the retiree DB 52.
  • the learning unit 23 uses the feature vectors received from the generation unit 22 as learning data to learn a prediction model 31 that, when a feature vector about an incumbent is input, predicts a retirement probability indicating the possibility of the incumbent retiring at multiple points in the future.
  • the learning unit 23 learns the parameters of the N months later retirement prediction model 31N so that the output of the model 31N becomes close to 1 when a positive-example feature vector is input and becomes close to 0 when a negative-example feature vector is input.
  • the learning unit 23 similarly learns each of the N+1 months later retirement prediction model 31N+1, ..., and the N+L months later retirement prediction model 31N+L.
  • the learning unit 23 stores the prediction model 31, which is composed of the N months later retirement prediction model 31N, the N+1 months later retirement prediction model 31N+1, ..., and the N+L months later retirement prediction model 31N+L, in a predetermined storage area of the retirement prediction processing device 10.
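  • as an illustration of this step, one binary classifier can be trained per prediction horizon, with retirees' feature vectors labeled 1 (positive examples) and incumbents' labeled 0 (negative examples). The sketch below uses a gradient-boosting classifier, but the patent does not fix a particular model family; the data layout and names are assumptions.

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    def train_horizon_models(features_by_horizon, n_start, n_end):
        """features_by_horizon[h] = (X, y): feature vectors built with the first period set to h months,
        where y = 1 for retirees (positive examples) and y = 0 for incumbents (negative examples)."""
        models = {}
        for horizon in range(n_start, n_end + 1):       # N, N+1, ..., N+L
            X, y = features_by_horizon[horizon]
            clf = GradientBoostingClassifier()
            clf.fit(np.asarray(X), np.asarray(y))
            models[horizon] = clf                       # e.g. models[N] plays the role of model 31N
        return models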
  • the learning unit 23 generates an interpretation model 32 based on the prediction model 31, which calculates the degree of contribution of each feature included in the feature vector to the prediction result of the prediction model 31.
  • the learning unit 23 generates the interpretation model 32 using SHAP (SHApley Additive exPlanations).
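  • a minimal sketch of how such an interpretation model could be built with the SHAP library, assuming each horizon model is tree-based (for other model families shap.Explainer or shap.KernelExplainer could be used instead); the function names are hypothetical.

    import numpy as np
    import shap

    def build_interpretation_model(horizon_model, background_X=None):
        """Wrap one trained horizon model with a SHAP explainer (assumes a tree-based classifier)."""
        return shap.TreeExplainer(horizon_model, data=background_X)

    def feature_contributions(explainer, feature_vector):
        """Return one contribution (SHAP value) per feature quantity for a single employee."""
        x = np.asarray(feature_vector).reshape(1, -1)
        return explainer.shap_values(x)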
  • the retirement prediction unit 40 is a functional configuration that functions when predicting the probability of retirement and the reason for retirement for an incumbent to be predicted.
  • the generation unit 41 acquires the personnel-related information of the current employee to be predicted, which is input into the retirement prediction processing device 10.
  • the personnel-related information of the incumbent who is the prediction target is the same as the personnel-related information described in the retirement prediction learning section 20. Similar to the generation unit 22 of the retirement prediction learning unit 20, the generation unit 41 generates a feature vector from the personnel-related information of the incumbent who is the prediction target. At this time, the generation unit 41 uses personnel-related information for a period dating back from the latest month.
  • the prediction unit 42 predicts the retirement probabilities of the incumbent to be predicted at multiple points in the future by inputting the feature vector of the incumbent to be predicted into the prediction model 31. Specifically, as shown in the upper diagram of FIG. 8, the prediction unit 42 inputs the feature vector of the incumbent to be predicted into the N months later retirement prediction model 31N, the N+1 months later retirement prediction model 31N+1, ..., and the N+L months later retirement prediction model 31N+L that constitute the prediction model 31. Thereby, the prediction unit 42 obtains the probability of retirement after N months, the probability of retirement after N+1 months, ..., and the probability of retirement after N+L months.
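  • as a sketch (reusing the hypothetical per-horizon models above), the time series of retirement probabilities can be obtained by querying each model with the same feature vector:

    import numpy as np

    def predict_retirement_curve(models, feature_vector):
        """Return {horizon in months: predicted retirement probability} for one incumbent."""
        x = np.asarray(feature_vector).reshape(1, -1)
        return {
            horizon: float(clf.predict_proba(x)[0, 1])   # probability of the positive (retirement) class
            for horizon, clf in sorted(models.items())
        }

    # example: find the first horizon at which the probability exceeds a threshold (cf. FIG. 9)
    # curve = predict_retirement_curve(models, x_incumbent)
    # first_alert = next((h for h, p in curve.items() if p >= threshold), None)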
  • the prediction unit 42 generates, from the retirement probability after N months, the retirement probability after N+1 months, ..., and the retirement probability after N+L months, a graph indicating the time-series change in the retirement probability, as shown in FIG. 9.
  • the example in FIG. 9 also shows a threshold value for determining whether the retirement probability is high or low. By outputting such a graph, it can be determined, for example, that the probability of retirement increases significantly in January and that some kind of retirement prevention measure is needed before then.
  • the prediction unit 42 passes the feature vector of the incumbent to be predicted, the probability of retirement in N months, the probability of retirement in N+1 months, . . . the probability of retirement in N+L months to the interpretation unit 43.
  • the interpretation unit 43 uses the prediction model 31, the feature vector of the incumbent to be predicted, and the interpretation model 32 to calculate the degree of contribution of each feature quantity included in the feature vector to the prediction result. If the contributions of all feature quantities were presented to a person in charge, such as a human resources department employee or the incumbent's manager, it would be difficult for the person in charge to check them when the number of feature quantities is large. Therefore, the interpretation unit 43 presents the feature quantities whose degree of contribution is greater than or equal to a predetermined value, or a predetermined number of feature quantities from the top in descending order of contribution, as the basis on which the prediction model 31 predicted the prediction result.
  • FIG. 10 shows an example of the degree of contribution for each feature amount.
  • the horizontal axis is the degree of contribution (0.0 to 1.0), and the numerical value shown in parentheses for each feature is the value of that feature.
  • from this example, the person in charge can understand, for instance, that the current employee to be predicted does not take special leave such as year-end and New Year vacation or summer vacation, and that the total working hours one month ago were relatively low and the employee tends to take days off.
  • the person in charge can also see that the incumbent who is the target of the prediction was engaged in a shift starting at 2:00 p.m., and from this can infer that the incumbent is not good at mornings or has some commitment in the morning.
  • the person in charge can also see that the incumbent who is the target of the prediction was late twice three months ago and that the total tardiness time of 272 minutes was extremely long.
  • the person in charge can understand that the current employee who is the target of prediction has been employed for a relatively long period of three years or more.
  • the person in charge can further see that the incumbent's total working hours fluctuate greatly, that the incumbent has been late three times in the past month, and that the number of days off and instances of tardiness is increasing. By comprehensively considering these factors, the person in charge can determine that the reason for the predicted retirement of the current employee to be predicted is "poor health."
  • the interpretation unit 43 refers to the correspondence table 33, which stores combinations of feature quantities and feature quantity values in association with natural-language texts interpreting those combinations, and converts the highly contributing feature quantities, that is, the basis for the prediction, into natural language.
  • FIG. 11 shows an example of the correspondence table 33.
  • the feature name, the value of the feature, the natural language for interpretation, and the retirement prediction reason are stored in association with each other.
  • the interpretation unit 43 acquires from the correspondence table 33 the natural language for interpretation and the retirement prediction reason corresponding to the combination of the highly contributing feature extracted using the interpretation model 32 and the value of the feature.
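  • the lookup against the correspondence table 33 could be sketched as follows; the table layout (a feature name plus a value range mapped to an interpretation text and a retirement prediction reason) is an assumption made only for illustration.

    def interpret_top_features(contributions, feature_values, correspondence_table, top_k=3):
        """contributions: {feature name: contribution degree} for the incumbent to be predicted.
        correspondence_table: {(feature name, (low, high)): (interpretation text, retirement prediction reason)}."""
        ranked = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
        results = []
        for feature_name, _ in ranked:
            value = feature_values[feature_name]
            for (name, (low, high)), (text, reason) in correspondence_table.items():
                if name == feature_name and low <= value <= high:
                    results.append({"feature": feature_name, "basis": text, "reason": reason})
                    break
        return results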
  • the interpretation unit 43 extracts, from among the plurality of retirees, retirees for whom the degree of similarity between the contribution of each feature quantity of the retiree and the contribution of each feature quantity of the incumbent to be predicted is equal to or greater than a predetermined value, or a predetermined number of retirees from the top in descending order of similarity. Specifically, the interpretation unit 43 acquires the feature vectors of the plurality of retirees from the retiree DB 52 and, as shown in FIG. 12, calculates the contribution of each feature quantity for each retiree using the retiree's feature vector and the interpretation model 32, and lists the calculation results. FIG. 13 shows an example of the list of contribution calculation results.
  • the interpretation unit 43 creates, for each of the incumbent to be predicted and the retirees, a contribution vector whose elements are the contribution degrees of the respective feature quantities, and calculates the Euclidean distance between the contribution vector of the incumbent to be predicted and that of each retiree as the degree of similarity.
  • the interpretation unit 43 then sorts the retirees in descending order of similarity to the contribution vector of the incumbent to be predicted, that is, in ascending order of the distance between the contribution vectors, and extracts the retirees whose similarity is equal to or greater than a predetermined value or a predetermined number of retirees from the top.
  • the interpretation unit 43 acquires, for each extracted retiree, the reason for retirement stored in the retiree DB 52. Furthermore, the interpretation unit 43 may identify, from the list as shown in FIG. 13, the feature quantity with the highest degree of contribution for the extracted retiree, and may also acquire the retirement prediction reason corresponding to that feature quantity from the correspondence table 33. This is based on the idea that a feature quantity with a high contribution for a retiree whose contribution pattern is similar to that of the incumbent to be predicted is likely to have influenced the reason for retirement. In the example of FIG. 13, the interpretation unit 43 acquires the retirement prediction reason corresponding to feature quantity 1, which has the highest degree of contribution for Mr. Y, the retiree most similar to the incumbent to be predicted.
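  • a sketch of the similarity search over contribution vectors, under the assumption that each person's contributions are held as a 1-D array keyed by personal code (a smaller Euclidean distance is treated as a higher similarity):

    import numpy as np

    def similar_retirees(incumbent_contrib, retiree_contribs, top_k=3):
        """Return the personal codes of the retirees whose contribution vectors are closest to the incumbent's."""
        incumbent = np.asarray(incumbent_contrib)
        distances = {
            code: float(np.linalg.norm(np.asarray(contrib) - incumbent))
            for code, contrib in retiree_contribs.items()
        }
        return sorted(distances, key=distances.get)[:top_k]   # smallest distance first = most similar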
  • the interpretation unit 43 creates a prediction result list that describes each piece of information interpreted as described above.
  • FIG. 14 shows an example of the prediction result list.
  • FIG. 14 is an example of a list of prediction results for incumbents whose probability of retirement after N months predicted by the prediction unit 42 is equal to or greater than a predetermined value.
  • the prediction result list in FIG. 14 includes, in association with the "personal code" of the relevant incumbent, information such as the "probability of retirement after N months", the "retirement prediction reason", the "top K prediction grounds", and the "past retirees with similar tendencies".
  • the "probability of retirement after N months” is the probability of retirement after N months, which is passed from the prediction unit 42.
  • “Retirement prediction reason” is a retirement prediction reason obtained from the correspondence table 33 based on a combination of a feature amount with a high degree of contribution and a value of the feature amount.
  • “Past retirees with similar tendencies” is the personal code of the retiree and the reason for retirement of the retiree, which are extracted based on the similarity of the degree of contribution of the feature amount. Note that in the example of FIG. 14, only information on one retiree is described, but information on a plurality of retirees may be recorded. By creating a list of prediction results and interpreted information in this way, it is possible to filter incumbents based on their probability of leaving or reason for leaving, and quickly search for incumbents who require priority attention.
  • the interpretation unit 43 further similarly creates prediction result lists for N+1 months later, ..., and N+L months later. This makes it possible to check the prediction result list for N months from now for incumbents who require immediate action, and the prediction result list for N+L months from now for incumbents for whom measures should be taken over the longer term.
  • the interpretation unit 43 outputs the created prediction result list. Furthermore, the interpretation unit 43 may also output a graph showing the time series of the retirement probabilities described above. In this way, by presenting the basis for the prediction in an easy-to-interpret format together with the retirement probabilities at multiple points in the future, it becomes possible to understand why the current employee being predicted may be considering retirement and to take appropriate measures to prevent retirement.
  • FIG. 15 is a flowchart showing the flow of learning processing by the retirement prediction processing device 10.
  • the learning process is performed by the CPU 11 reading out the retirement prediction learning program from the ROM 12 or the storage 14, loading it onto the RAM 13, and executing it. Note that the learning process is an example of the retirement prediction learning method of the present invention.
  • in step S11, the CPU 11, as the filter unit 21, acquires the personnel-related information of the incumbents from the incumbent DB 51 and acquires the personnel-related information of the retirees from the retiree DB 52.
  • in step S12, the CPU 11, as the filter unit 21, extracts, for each retiree, personnel-related information prior to a point that is the first period before the time of retirement.
  • for each current employee, the filter unit 21 extracts personnel-related information prior to a point that is the first period plus the second period before the present time.
  • in step S13, the CPU 11, as the generation unit 22, performs various processing, such as statistical processing and conversion into categorical variables, on each item of the personnel-related information of the current employees and retirees extracted in step S12, to generate feature vectors each containing a plurality of feature quantities.
  • in step S14, the CPU 11, as the learning unit 23, learns the prediction model 31 using the retirees' feature vectors generated in step S13 as positive examples and the incumbents' feature vectors as negative examples.
  • in step S15, the CPU 11, as the learning unit 23, generates, based on the learned prediction model 31, the interpretation model 32 for calculating the degree of contribution of each feature quantity included in the feature vector to the prediction result of the prediction model 31, and the learning process ends.
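  • tying steps S11 to S15 together, the learning process can be sketched as the following orchestration of the hypothetical helpers shown earlier (list_people and build_feature_vector stand in for the DB access and the feature generation and are not defined in the patent; this is pseudocode-level illustration only):

    def learning_process(incumbent_db, retiree_db, n_start, n_end):
        """Sketch of steps S11-S15: filter, generate features, learn the models, build the explainers."""
        features_by_horizon = {}
        for horizon in range(n_start, n_end + 1):                          # first period = N, ..., N+L months
            X, y = [], []
            for person, is_retiree in list_people(incumbent_db, retiree_db):    # hypothetical accessor (S11)
                drop = horizon if is_retiree else horizon + person.x_offset     # x_offset plays the role of X
                window = extract_training_window(person.monthly_df, months_to_drop=drop)   # S12
                X.append(build_feature_vector(window))                     # hypothetical: preprocessing + statistics (S13)
                y.append(1 if is_retiree else 0)
            features_by_horizon[horizon] = (X, y)
        models = train_horizon_models(features_by_horizon, n_start, n_end)      # S14
        explainers = {h: build_interpretation_model(m) for h, m in models.items()}   # S15
        return models, explainers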
  • FIG. 16 is a flowchart showing the flow of prediction processing by the retirement prediction processing device 10.
  • the prediction process is performed by the CPU 11 reading out the retirement prediction program from the ROM 12 or the storage 14, loading it into the RAM 13, and executing it. Note that the prediction process is an example of the retirement prediction method of the present invention.
  • in step S21, the CPU 11, as the generation unit 41, acquires the personnel-related information of the current employee to be predicted, which is input to the retirement prediction processing device 10.
  • in step S22, the CPU 11, as the generation unit 41, generates a feature vector from the personnel-related information of the incumbent who is the prediction target.
  • in step S23, the CPU 11, as the prediction unit 42, predicts the retirement probabilities of the incumbent to be predicted at multiple points in the future by inputting the feature vector of the incumbent to be predicted into the prediction model 31.
  • in step S24, the CPU 11, as the interpretation unit 43, uses the prediction model 31, the feature vectors of the incumbent to be predicted and of the retirees, and the interpretation model 32 to calculate the degree of contribution of each feature quantity included in the feature vectors to the prediction result.
  • in step S25, the CPU 11, as the interpretation unit 43, refers to the correspondence table 33 and acquires the natural language for interpretation and the retirement prediction reason corresponding to the combination of a feature quantity with a high degree of contribution and the value of that feature quantity among the feature quantities of the incumbent to be predicted.
  • in step S26, the CPU 11, as the interpretation unit 43, extracts, from among the plurality of retirees, retirees for whom the degree of similarity between the contribution of each feature quantity of the retiree and the contribution of each feature quantity of the incumbent to be predicted is greater than or equal to a predetermined value, or a predetermined number of retirees from the top in descending order of similarity.
  • in step S27, the CPU 11, as the interpretation unit 43, acquires, for each extracted retiree, the reason for retirement stored in the retiree DB 52.
  • in step S28, the CPU 11, as the interpretation unit 43, creates a prediction result list including the retirement probabilities of incumbents whose retirement probability after the first period, predicted in step S23, is greater than or equal to a predetermined value. Further, the CPU 11, as the interpretation unit 43, includes in the prediction result list the natural language indicating the basis for the prediction and the retirement prediction reason acquired in step S25, the personal code of the retiree extracted in step S26, and the reason for retirement of that retiree acquired in step S27. The CPU 11, as the interpretation unit 43, then outputs the created prediction result list, and the prediction process ends.
  • as described above, the retirement prediction unit of the retirement prediction processing device according to the present embodiment generates a feature vector including a plurality of feature quantities from the personnel-related information for a predetermined period of the incumbent to be predicted, and predicts the retirement probabilities of the incumbent to be predicted at a plurality of future points in time by using that feature vector and a prediction model that is trained with the feature vectors generated for each of a plurality of retirees as positive examples and the feature vectors generated for each of a plurality of incumbents as negative examples, and that, when a feature vector for an incumbent is input, predicts a retirement probability indicating the possibility that the incumbent will retire at each of the plurality of future points in time. This makes it possible to predict retirement in order to support taking measures to prevent retirement at an appropriate time.
  • furthermore, as shown in B of FIG. 17, the retirement prediction unit of the retirement prediction processing device also predicts retirement probabilities and reasons for retirement for incumbents who were not yet employed at the time of learning.
  • the retirement prediction unit may output the retirement probabilities at multiple points in the future as a graph showing time-series changes in the retirement probabilities. This makes it possible to appropriately judge when to take measures to prevent retirement.
  • the retirement prediction unit may use the prediction model, the feature vector of the incumbent to be predicted, and an interpretation model that calculates the degree of contribution of each feature quantity to the prediction result of the prediction model, and may present the feature quantities whose contribution is greater than or equal to a predetermined value, or a predetermined number of top feature quantities in descending order of contribution, as the basis on which the prediction model predicted the prediction result. This makes it possible to understand the basis for predicting the probability of retirement and to consider appropriate measures to prevent retirement.
  • the retirement prediction unit may extract and present, from among the plurality of retirees, retirees for whom the degree of similarity between the feature vector of the retiree and the feature vector of the incumbent to be predicted is equal to or greater than a predetermined value, or for whom the degree of similarity between the contributions of the feature quantities of the retiree and those of the incumbent to be predicted is equal to or greater than a predetermined value, or a predetermined number of retirees from the top in descending order of similarity. This makes it possible to consider appropriate measures to prevent retirement by referring to information on retirees who show tendencies similar to those of the current employee who is the target of prediction. In particular, when the similarity of the contributions of the feature quantities is used, the feature quantities that led to resignation are expected to be similar, so the reason for retirement can be appropriately assumed and more appropriate retirement prevention measures can be considered.
  • the retirement prediction unit may present the reason for retirement stored in advance for the extracted retiree. This makes it possible to appropriately assume the reasons for retirement and consider more appropriate retirement prevention measures.
  • the retirement prediction unit may refer to a correspondence table in which combinations of feature quantities and reference values of the feature quantity values are stored in association with natural-language texts interpreting those combinations, and may convert the feature quantities to be presented into natural language before presenting them. As a result, even a person in charge, such as a human resources staff member or a manager, who is not familiar with machine learning can easily understand the basis of the predictions made by the prediction model.
  • the retirement prediction unit may also present, by referring to a correspondence table in which combinations of feature quantities and reference values of the feature quantity values are stored in association with retirement prediction reasons represented by those combinations, the retirement prediction reason corresponding to the presented feature quantity. This makes it possible to easily understand the predicted reason for retirement from the feature quantities that form the basis of the prediction.
  • the retirement prediction unit may generate the feature vectors by using the value of an item as it is or converting it into a categorical variable, or by statistically processing the values for the most recent predetermined period.
  • further, the retirement prediction learning unit of the retirement prediction processing device according to the present embodiment generates a feature vector including a plurality of feature quantities from the personnel-related information for a predetermined period of each of the current employees and retirees, and learns, with the feature vectors generated for each of a plurality of retirees as positive examples and the feature vectors generated for each of a plurality of incumbents as negative examples, a prediction model that, when a feature vector for an incumbent is input, predicts a retirement probability indicating the possibility that the incumbent will retire at a plurality of future points in time. This makes it possible to generate a prediction model capable of predicting retirement so as to support taking measures to prevent retirement at an appropriate time.
  • as the personnel-related information used to generate the feature quantities for learning a prediction model that predicts the probability of retirement after a first period from the present time, the retirement prediction learning unit may extract, for a retiree, personnel-related information prior to a point that is the first period before the time of retirement, and, for a current employee, personnel-related information prior to a point that, going back from the current time, is the first period plus a second period before the present time.
  • a plurality of periods may be set as the second period, and the extracted personnel-related information may be used as the personnel-related information for the predetermined period to generate the feature vectors. Thereby, it is possible to prevent a feature quantity generated in the same way from the values of items common to all incumbents from becoming a leak feature quantity.
  • FIG. 18 shows an example of the verification results of the prediction accuracy of the prediction model in this embodiment.
  • FIG. 18 shows the recall rate and precision rate for each employment category of incumbents as verification results for a prediction model that predicts the probability of retirement six months later.
  • A to H in FIG. 18 are masked labels representing employment categories such as part-time workers and full-time employees. Although there are differences by employment category, high recall rates are obtained for specific employment categories. It can be seen that even for employment category G, which has the lowest recall rate, signs of retirement can be found for approximately one in four incumbents.
  • N is an integer larger than 0
  • in the above embodiment, the case where the retirement prediction learning section and the retirement prediction section are realized by one computer (the retirement prediction processing device) has been described, but they may be realized by different computers.
  • the retirement prediction learning process and the retirement prediction process, which the CPU executes by reading software (programs) in the above embodiment, may be executed by various processors other than the CPU.
  • examples of such processors include a PLD (Programmable Logic Device) whose circuit configuration can be changed after manufacturing, such as an FPGA (Field-Programmable Gate Array), and a dedicated electric circuit that is a processor having a circuit configuration specially designed to execute specific processing, such as an ASIC (Application Specific Integrated Circuit).
  • the retirement prediction learning process and the retirement prediction process may be executed by one of these various processors, or by a combination of two or more processors of the same type or different types (for example, a plurality of FPGAs, or a combination of a CPU and an FPGA).
  • the hardware structure of these various processors is, more specifically, an electric circuit that is a combination of circuit elements such as semiconductor elements.
  • in the above embodiment, the retirement prediction learning program and the retirement prediction program are stored (installed) in advance in the ROM 12 or the storage 14, but the present invention is not limited to this.
  • the programs may be provided in a form stored in a non-transitory storage medium such as a CD-ROM (Compact Disk Read Only Memory), a DVD-ROM (Digital Versatile Disk Read Only Memory), or a USB (Universal Serial Bus) memory. Further, the programs may be downloaded from an external device via a network.
  • a retirement prediction device in which the processor is configured to: generate a feature vector including a plurality of feature quantities from personnel-related information for a predetermined period of each of current employees and retirees; and predict the retirement probabilities of an incumbent to be predicted at a plurality of future points in time by using a prediction model and the feature vector of the incumbent to be predicted, the prediction model being trained with the feature vectors generated for each of a plurality of retirees as positive examples and the feature vectors generated for each of a plurality of incumbents as negative examples, and predicting, when a feature vector for an incumbent is input, a retirement probability indicating the possibility that the incumbent will retire at each of the plurality of future points in time.
  • a retirement prediction learning device in which the processor is configured to: generate a feature vector including a plurality of feature quantities from personnel-related information for a predetermined period of each of current employees and retirees; and learn, with the feature vectors generated for each of a plurality of retirees as positive examples and the feature vectors generated for each of a plurality of incumbents as negative examples, a prediction model that, when a feature vector for an incumbent is input, predicts a retirement probability indicating the possibility that the incumbent will retire at a plurality of future points in time.
  • a non-transitory recording medium storing a program executable by a computer to execute a retirement prediction process, in which the retirement prediction process includes: generating a feature vector including a plurality of feature quantities from personnel-related information for a predetermined period of each of current employees and retirees; and predicting the retirement probabilities of an incumbent to be predicted at a plurality of future points in time by using a prediction model and the feature vector of the incumbent to be predicted, the prediction model being trained with the feature vectors generated for each of a plurality of retirees as positive examples and the feature vectors generated for each of a plurality of incumbents as negative examples, and predicting, when a feature vector for an incumbent is input, a retirement probability indicating the possibility that the incumbent will retire at each of the plurality of future points in time.
  • a non-transitory recording medium storing a program executable by a computer to execute a retirement prediction learning process, in which the retirement prediction learning process includes: generating a feature vector including a plurality of feature quantities from personnel-related information for a predetermined period of each of current employees and retirees; and learning, with the feature vectors generated for each of a plurality of retirees as positive examples and the feature vectors generated for each of a plurality of incumbents as negative examples, a prediction model that, when a feature vector for an incumbent is input, predicts a retirement probability indicating the possibility that the incumbent will retire at a plurality of future points in time.

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Human Resources & Organizations (AREA)
  • Operations Research (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Data Mining & Analysis (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The purpose of a disclosed technology is to make employee resignation prediction for assisting in implementing an employee resignation prevention measure at an appropriate timing. A generation unit (41) generates a feature vector including a plurality of feature quantities from personnel related information sets in a prescribed period of time for each active employee and each resigned employee. A prediction unit (42) predicts an employee resignation probability of an active employee subjected to prediction resigning at each of a plurality of time points in the future by using a feature vector of the active employee subjected to prediction and a prediction model 31 that is obtained by performing learning of feature vectors generated for a plurality of resigned employees as positive examples and feature vectors generated for a plurality of active employees as negative examples and that provides a prediction of, when the feature vector of the active employee is inputted, an employee resignation probability indicating the possibility of the active employee resigning at each of the plurality of time points in the future.

Description

Retirement prediction device, retirement prediction learning device, method, and program
 The disclosed technology relates to a retirement prediction device, a retirement prediction learning device, a retirement prediction method, a retirement prediction learning method, and a program.
 With the decline in the working population, the effective job openings-to-applicants ratio is on the rise, and recruitment costs are also on the rise. In a diversifying society, work is becoming more specialized, and it is more cost-effective to spend training costs to improve the skills of employees and have them demonstrate their performance in the company, rather than hiring more people blindly. However, if an employee on whom training costs have been spent resigns in a short period of time, it is a loss to the company.
 Therefore, in order to prevent experienced employees from resigning, methods have been proposed to predict whether or not employees will resign. Such a method predicts retirement using the relationship between employee evaluation and satisfaction levels and the turnover rate.
 It tends to be difficult to retain employees by the time they notify the company of their desire to retire. Therefore, in order to have employees stay with the company, it is necessary to take appropriate retirement prevention measures at the timing when employees start thinking about resigning, that is, several months before they retire.
 However, although conventional technology predicts whether or not a person will retire in the future, it does not predict when the possibility of retirement will increase. Therefore, there is a high possibility that the appropriate timing to approach employees with retirement prevention measures may be missed, resulting in their resignation.
 The disclosed technology was developed in view of the above points, and aims to predict retirement to support taking measures to prevent retirement at an appropriate time.
 A first aspect of the present disclosure is a retirement prediction device including: a generation unit that generates a feature vector including a plurality of feature quantities from personnel-related information for a predetermined period of each of current employees and retirees; and a prediction unit that predicts the retirement probabilities of an incumbent to be predicted at a plurality of future points in time by using a prediction model and the feature vector of the incumbent to be predicted, the prediction model being trained with the feature vectors generated for each of a plurality of retirees as positive examples and the feature vectors generated for each of a plurality of incumbents as negative examples, and predicting, when a feature vector for an incumbent is input, a retirement probability indicating the possibility that the incumbent will retire at each of the plurality of future points in time.
 A second aspect of the present disclosure is a retirement prediction learning device including: a generation unit that generates a feature vector including a plurality of feature quantities from personnel-related information for a predetermined period of each of current employees and retirees; and a learning unit that learns, with the feature vectors generated for each of a plurality of retirees as positive examples and the feature vectors generated for each of a plurality of incumbents as negative examples, a prediction model that, when a feature vector for an incumbent is input, predicts a retirement probability indicating the possibility that the incumbent will retire at a plurality of future points in time.
 A third aspect of the present disclosure is a retirement prediction method in which a generation unit generates a feature vector including a plurality of feature quantities from personnel-related information for a predetermined period of each of current employees and retirees, and a prediction unit predicts the retirement probabilities of an incumbent to be predicted at a plurality of future points in time by using a prediction model and the feature vector of the incumbent to be predicted, the prediction model being trained with the feature vectors generated for each of a plurality of retirees as positive examples and the feature vectors generated for each of a plurality of incumbents as negative examples, and predicting, when a feature vector for an incumbent is input, a retirement probability indicating the possibility that the incumbent will retire at each of the plurality of future points in time.
 A fourth aspect of the present disclosure is a retirement prediction learning method in which a generation unit generates a feature vector including a plurality of feature quantities from personnel-related information for a predetermined period of each of current employees and retirees, and a learning unit learns, with the feature vectors generated for each of a plurality of retirees as positive examples and the feature vectors generated for each of a plurality of incumbents as negative examples, a prediction model that, when a feature vector for an incumbent is input, predicts a retirement probability indicating the possibility that the incumbent will retire at a plurality of future points in time.
 A fifth aspect of the present disclosure is a program that causes a computer to function as each component of the retirement prediction device or retirement prediction learning device described above.
 According to the disclosed technology, it is possible to predict retirement to support taking measures to prevent retirement at an appropriate time.
FIG. 1 is a block diagram showing the hardware configuration of a retirement prediction processing device.
FIG. 2 is a block diagram showing an example of a functional configuration of a retirement prediction processing device.
FIG. 3 is a diagram for explaining personnel-related information.
FIG. 4 is a diagram for explaining the period of personnel-related information used for learning.
FIG. 5 is a diagram for explaining an example of generation of feature quantities.
FIG. 6 is a diagram for explaining the period of personnel-related information used for static items and statistical items.
FIG. 7 is a diagram showing an example of statistical processing of attendance data.
FIG. 8 is a diagram for explaining time-series prediction of retirement probability.
FIG. 9 is a diagram showing an example of a time-series prediction result of retirement probability.
FIG. 10 is a diagram showing an example of the degree of contribution for each feature quantity.
FIG. 11 is a diagram showing an example of a correspondence table.
FIG. 12 is a diagram for explaining the degree of similarity in contribution of feature quantities between an incumbent and a retiree.
FIG. 13 is a diagram showing an example of a list of calculation results of contribution degrees of feature quantities.
FIG. 14 is a diagram showing an example of a prediction result list.
FIG. 15 is a flowchart showing the flow of learning processing.
FIG. 16 is a flowchart showing the flow of prediction processing.
FIG. 17 is a diagram schematically showing learning data, prediction data, and prediction results.
FIG. 18 is a diagram showing an example of a verification result of the prediction accuracy of the prediction model in this embodiment.
FIG. 19 is a diagram for explaining the degree of similarity of feature quantities between an incumbent and a retiree.
 以下、開示の技術の実施形態の一例を、図面を参照しつつ説明する。なお、各図面において同一又は等価な構成要素及び部分には同一の参照符号を付与している。また、図面の寸法比率は、説明の都合上誇張されており、実際の比率とは異なる場合がある。 Hereinafter, an example of an embodiment of the disclosed technology will be described with reference to the drawings. In addition, the same reference numerals are given to the same or equivalent components and parts in each drawing. Furthermore, the dimensional ratios in the drawings are exaggerated for convenience of explanation and may differ from the actual ratios.
FIG. 1 is a block diagram showing the hardware configuration of the retirement prediction processing device 10 according to the present embodiment. As shown in FIG. 1, the retirement prediction processing device 10 includes a CPU (Central Processing Unit) 11, a ROM (Read Only Memory) 12, a RAM (Random Access Memory) 13, a storage 14, an input unit 15, a display unit 16, and a communication I/F (interface) 17. These components are communicably connected to one another via a bus 19.
 CPU11は、中央演算処理ユニットであり、各種プログラムを実行したり、各部を制御したりする。すなわち、CPU11は、ROM12又はストレージ14からプログラムを読み出し、RAM13を作業領域としてプログラムを実行する。CPU11は、ROM12又はストレージ14に記憶されているプログラムに従って、上記各構成の制御及び各種の演算処理を行う。本実施形態では、ROM12又はストレージ14には、後述する学習処理を実行するための退職予測学習プログラム、及び後述する予測処理を実行するための退職予測プログラムが格納されている。 The CPU 11 is a central processing unit that executes various programs and controls various parts. That is, the CPU 11 reads a program from the ROM 12 or the storage 14 and executes the program using the RAM 13 as a work area. The CPU 11 controls each of the above-mentioned components and performs various calculation processes according to programs stored in the ROM 12 or the storage 14. In this embodiment, the ROM 12 or the storage 14 stores a retirement prediction learning program for executing a learning process described later and a retirement prediction program for executing a prediction process described later.
 ROM12は、各種プログラム及び各種データを格納する。RAM13は、作業領域として一時的にプログラム又はデータを記憶する。ストレージ14は、HDD(Hard Disk Drive)、SSD(Solid State Drive)等の記憶装置により構成され、オペレーティングシステムを含む各種プログラム、及び各種データを格納する。 The ROM 12 stores various programs and various data. The RAM 13 temporarily stores programs or data as a work area. The storage 14 is constituted by a storage device such as an HDD (Hard Disk Drive) or an SSD (Solid State Drive), and stores various programs including an operating system and various data.
 入力部15は、マウス等のポインティングデバイス、及びキーボードを含み、各種の入力を行うために使用される。表示部16は、例えば、液晶ディスプレイであり、各種の情報を表示する。表示部16は、タッチパネル方式を採用して、入力部15として機能してもよい。 The input unit 15 includes a pointing device such as a mouse and a keyboard, and is used to perform various inputs. The display unit 16 is, for example, a liquid crystal display, and displays various information. The display section 16 may employ a touch panel system and function as the input section 15.
 通信I/F17は、他の機器と通信するためのインタフェースである。当該通信には、例えば、イーサネット(登録商標)若しくはFDDI等の有線通信の規格、又は、4G、5G、若しくはWi-Fi(登録商標)等の無線通信の規格が用いられる。 The communication I/F 17 is an interface for communicating with other devices. For this communication, for example, a wired communication standard such as Ethernet (registered trademark) or FDDI, or a wireless communication standard such as 4G, 5G, or Wi-Fi (registered trademark) is used.
 次に、退職予測処理装置10の機能構成について説明する。図2は、退職予測処理装置10の機能構成の例を示すブロック図である。図2に示すように、退職予測処理装置10は、機能構成として、退職予測学習部20と、退職予測部40とを含む。退職予測学習部20は、さらに、フィルタ部21と、生成部22と、学習部23とを含む。退職予測部40は、さらに、生成部41と、予測部42と、解釈部43とを含む。なお、退職予測学習部20は、本発明の退職予測学習装置の一例であり、退職予測部40は、本発明の退職予測装置の一例である。各機能構成は、CPU11がROM12又はストレージ14に記憶された退職予測学習プログラム及び退職予測プログラムを読み出し、RAM13に展開して実行することにより実現される。 Next, the functional configuration of the retirement prediction processing device 10 will be explained. FIG. 2 is a block diagram showing an example of the functional configuration of the retirement prediction processing device 10. As shown in FIG. 2, the retirement prediction processing device 10 includes a retirement prediction learning section 20 and a retirement prediction section 40 as functional configurations. The retirement prediction learning section 20 further includes a filter section 21, a generation section 22, and a learning section 23. The retirement prediction unit 40 further includes a generation unit 41, a prediction unit 42, and an interpretation unit 43. Note that the retirement prediction learning section 20 is an example of a retirement prediction learning device of the present invention, and the retirement prediction section 40 is an example of a retirement prediction device of the present invention. Each functional configuration is realized by the CPU 11 reading out a retirement prediction learning program and a retirement prediction program stored in the ROM 12 or the storage 14, loading them onto the RAM 13, and executing them.
 まず、退職予測学習部20について説明する。退職予測学習部20は、後述する予測モデル31の学習時に機能する機能構成である。 First, the retirement prediction learning section 20 will be explained. The retirement prediction learning unit 20 is a functional configuration that functions when learning a prediction model 31, which will be described later.
 フィルタ部21は、現職者DB51から現職者の人事関連情報を取得すると共に、退職者DB52から退職者の人事関連情報を取得する。人事関連情報は、基本情報、所属、給与、勤怠、目標設定、評価等のデータ区分の情報を含む。基本情報、所属、給与、勤怠、目標設定、評価等の各情報は、さらに、例えば図3に示すような各項目の情報を含む。図3の例では、各データ区分の項目の情報から特徴量を生成する際に、統計処理を行うか否かを示す「統計処理有無」についても示している。 The filter unit 21 acquires personnel-related information of the incumbent from the incumbent DB 51 and acquires personnel-related information of the retiree from the retiree DB 52. The personnel-related information includes information on data categories such as basic information, affiliation, salary, attendance, goal setting, and evaluation. The information such as basic information, affiliation, salary, attendance, goal setting, and evaluation further includes information on each item as shown in FIG. 3, for example. The example in FIG. 3 also shows "statistical processing presence/absence" indicating whether statistical processing is to be performed when generating feature amounts from the information of the items of each data category.
In each of the incumbent DB 51 and the retiree DB 52, personnel-related information from a predetermined start point (for example, the time of hiring) to the present time or to the time of retirement is stored for every predetermined period (for example, every month), in association with a personal code that identifies each incumbent or retiree. The retiree DB 52 also stores, for each retiree, the date of retirement and the reason for retirement obtained through an exit interview or the like. Furthermore, the retiree DB 52 stores, for each retiree, the feature vector generated from the personnel-related information by the generation unit 22, which will be described later.
From the acquired full-period personnel-related information of each retiree and incumbent, the filter unit 21 extracts the personnel-related information for the period used to generate the feature vectors for training the prediction model 31. Specifically, for a retiree, the filter unit 21 extracts the personnel-related information at or before the point that is the first period before the time of retirement. For an incumbent, the filter unit 21 goes back the first period from the present time and then a further second period, and extracts the personnel-related information at or before that point. A plurality of periods are set as the second period.
For example, consider extracting, from monthly personnel-related information spanning the month of hiring to the latest month or the month of retirement, the personnel-related information used to train the prediction model 31 that predicts the retirement probability N months ahead. For a retiree, as shown in the upper part of FIG. 4, the filter unit 21 discards the N months of personnel-related information counted back from the month of retirement, and extracts, as the information used for training, the personnel-related information from N months before the month of retirement back to the month of hiring. For an incumbent, as shown in the lower part of FIG. 4, the filter unit 21 discards the N + X months of personnel-related information counted back from the latest month, and extracts, as the information used for training, the personnel-related information from N + X months before the latest month back to the month of hiring.
The reason why, for incumbents, X months of personnel-related information are discarded in addition to the N months counted back from the latest month is as follows. If only the N months counted back from the latest month were discarded, the model would effectively learn to predict the probability of retiring in the month corresponding to the latest month; that is, the feature quantities generated from the extracted personnel-related information would contain leak features. For example, if the latest month is December and N = 3, simply discarding N months from the latest month means that the personnel-related information up to September is used for every incumbent. If, for instance, a feature quantity such as "number of holidays" is generated from the attendance data by the statistical processing described later, the value of that feature quantity becomes the same for all incumbents, and it ends up being a leak feature.
For this reason, X months of personnel-related information are discarded in addition to the N months counted back from the latest month. A plurality of arbitrary values of X greater than or equal to 0 are prepared and allocated to the incumbents. The allocation of X may be random, or it may be correlated with, for example, years of service. In the latter case, setting a small X for incumbents with short service and a large X for incumbents with long service prevents the N + X months from exceeding the period from the month of hiring to the latest month, which would otherwise cause missing data, and allows as much personnel-related information as possible to be used.
 フィルタ部21は、第1期間を、Nヶ月、N+1ヶ月、・・・、N+Lヶ月(Lは任意の整数)として、各第1期間に応じた人事関連情報を抽出する。フィルタ部21は、退職者及び現職者の各々について抽出した期間の人事関連情報を生成部22へ受け渡す。 The filter unit 21 sets the first periods to N months, N+1 months, . . . , N+L months (L is an arbitrary integer) and extracts personnel-related information according to each first period. The filter unit 21 passes the personnel-related information for the period extracted for each of retirees and current employees to the generation unit 22.
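As a concrete illustration of the filtering described above, the following is a minimal sketch that assumes each person's monthly personnel-related records are held in a pandas DataFrame with a monthly `month` Period column; the function names, the column name, and the X-allocation rule are illustrative assumptions, not part of the specification.

```python
import pandas as pd

def extract_training_window(records: pd.DataFrame, anchor_month: pd.Period,
                            n_months: int, x_months: int = 0) -> pd.DataFrame:
    """Keep only rows at or before (anchor - N - X) months.

    records      : monthly personnel rows with a 'month' column of pd.Period (freq='M')
    anchor_month : month of retirement (retiree) or latest month (incumbent)
    n_months     : prediction horizon N
    x_months     : extra offset X (0 for retirees, per-person value for incumbents)
    """
    cutoff = anchor_month - (n_months + x_months)
    return records[records["month"] <= cutoff]

def allocate_x(months_of_tenure: int, n_months: int, max_x: int = 12) -> int:
    """Illustrative X allocation correlated with tenure: longer tenure gets a larger
    offset, while N + X never exceeds the months available since hiring."""
    return max(0, min(max_x, months_of_tenure - n_months - 1))
```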
The generation unit 22 generates a feature vector including a plurality of feature quantities from the personnel-related information passed from the filter unit 21. For items of the personnel-related information whose values do not change during the period extracted by the filter unit 21, or whose values change only irregularly (hereinafter, "static items"), the generation unit 22 generates feature quantities by using the item values as they are or by converting them into categorical variables. For items whose values are numerical and change regularly (hereinafter, "statistical items"), the generation unit 22 generates feature quantities by statistically processing the values for the most recent predetermined period.
More specifically, as shown in FIG. 5, for each item included in the personnel-related information, the generation unit 22 sequentially applies processing according to the data category to which the item belongs and the type of its value (numerical value, categorical variable, text, etc.) to generate feature quantities. In the example of FIG. 5, the generation unit 22 first removes abnormal values from the values of each item based on predetermined rules. Next, the generation unit 22 executes name-matching processing, for example, when a plurality of pieces of personnel-related information exist for the same person. Next, the generation unit 22 performs statistical processing (described in detail later) for numerical statistical items such as salary, attendance, goals, and evaluations. Next, the generation unit 22 standardizes items with numerical values by converting them so that the values fall in the range of 0 to 1. Next, for items with numerical values, persons in the same employment category, for example, are grouped, and group standardization is executed to standardize the values within each group. Next, the generation unit 22 one-hot encodes items whose values are categorical variables. Next, for items whose values are text, the generation unit 22 tokenizes the text into words and weights each word by an index of the word's appearance rate across all personnel-related information (for example, TF-IDF). Finally, if an item has missing values, the generation unit 22 imputes them using the item's mean, mode, or the like.
Note that not all of the processes shown in FIG. 5 are essential; for example, outlier removal, group standardization, and missing-value imputation may be omitted. The order of some of the processes may also be changed.
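The following is a rough sketch of the item-wise processing of FIG. 5, assuming the extracted records are in a pandas DataFrame; the column groupings, the outlier rule, and the omission of name matching and group standardization are simplifications for illustration only.

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.feature_extraction.text import TfidfVectorizer

def build_features(df: pd.DataFrame,
                   numeric_cols: list[str],
                   categorical_cols: list[str],
                   text_cols: list[str]) -> pd.DataFrame:
    out = df.copy()

    # 1) Crude outlier removal for numeric items (the 1%/99% rule is illustrative).
    for c in numeric_cols:
        q_low, q_high = out[c].quantile([0.01, 0.99])
        out[c] = out[c].clip(q_low, q_high)

    # 2) Scale numeric items into the 0-1 range.
    out[numeric_cols] = MinMaxScaler().fit_transform(out[numeric_cols])

    # 3) One-hot encode categorical items.
    out = pd.get_dummies(out, columns=categorical_cols)

    # 4) TF-IDF weight free-text items (tokenization depends on the language).
    for c in text_cols:
        tfidf = TfidfVectorizer()
        mat = tfidf.fit_transform(out[c].fillna(""))
        text_feats = pd.DataFrame(
            mat.toarray(),
            columns=[f"{c}_{w}" for w in tfidf.get_feature_names_out()],
            index=out.index)
        out = pd.concat([out.drop(columns=[c]), text_feats], axis=1)

    # 5) Fill remaining missing numeric values with the column mean.
    return out.fillna(out.mean(numeric_only=True))
```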
Here, the range of personnel-related information used for static items and for statistical items is described. For static items such as years of service, qualification grade, department, employment category, and performance details, the entire period extracted by the filter unit 21 (the shaded portion in FIG. 6) is used. For an item whose value changed within the period, a feature quantity may be generated by combining the multiple values, or a feature quantity may be generated from the latest value with a flag indicating that the value has changed.
For statistical items whose values change every month, such as salary and attendance, the most recent M months of the period extracted by the filter unit 21 are used, as shown in FIG. 6. That is, for a retiree, the M months counted back from the point N months before the month of retirement are used, and for an incumbent, the M months counted back from the point N + X months before the latest month are used. Items such as goals and evaluations are often updated only once per fiscal year or per half year, so for such items the period of personnel-related information used need not be limited to M months, and the entire period may be used.
FIG. 7 shows an example of statistical processing of the "total working hours" item of the attendance data. The upper part of FIG. 7 shows the daily total working hours for each of the M months (M = 3 in the example of FIG. 7). The generation unit 22 generates feature quantities by statistically processing these M months of total working hours, computing the monthly minimum, maximum, sum, mean, median, standard deviation, and so on.
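A small worked example of this statistical processing, assuming daily total-working-hours rows in pandas; the column names and the three-month sample values are made up for illustration.

```python
import pandas as pd

# Daily total-working-hours records for the most recent M months of one person.
attendance = pd.DataFrame({
    "month": ["2023-10"] * 2 + ["2023-11"] * 2 + ["2023-12"] * 2,
    "total_hours": [7.5, 8.0, 9.0, 6.5, 8.0, 10.0],
})

# Monthly min / max / sum / mean / median / std become feature values.
monthly_stats = (attendance
                 .groupby("month")["total_hours"]
                 .agg(["min", "max", "sum", "mean", "median", "std"]))

# Flatten into "<item>_<stat>_<month>" feature names for the feature vector.
features = {f"total_hours_{stat}_{month}": value
            for month, row in monthly_stats.iterrows()
            for stat, value in row.items()}
```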
 生成部22は、現職者及び退職者の各々について、上記のように生成した複数の特徴量を要素とする特徴ベクトルを生成する。また、生成部22は、第1期間を、Nヶ月、N+1ヶ月、・・・、N+Lヶ月(Lは任意の整数)として抽出された人事関連情報毎に、特徴ベクトルを生成する。生成部22は、生成した特徴ベクトルを学習部23へ受け渡す。また、生成部22は、退職者の特徴ベクトルを退職者DB52に記憶する。 The generation unit 22 generates a feature vector whose elements are the plurality of feature amounts generated as described above for each of the incumbent and retired person. Furthermore, the generation unit 22 generates a feature vector for each piece of personnel-related information extracted by setting the first period to N months, N+1 months, . . . , N+L months (L is an arbitrary integer). The generation unit 22 passes the generated feature vector to the learning unit 23. Further, the generation unit 22 stores the retiree's feature vector in the retiree DB 52.
Using the feature vectors passed from the generation unit 22 as training data, the learning unit 23 learns the prediction model 31, which, when a feature vector for an incumbent is input, predicts retirement probabilities indicating the likelihood that the incumbent will retire at a plurality of future points in time. Specifically, the learning unit 23 uses the feature vectors generated from the personnel-related information extracted with the first period set to N months to train an N-months-ahead retirement prediction model 31N, which predicts the retirement probability N months ahead. In doing so, the learning unit 23 treats the feature vector generated for each of the plurality of retirees as a positive example (correct retirement probability = 1) and the feature vector generated for each of the plurality of incumbents as a negative example (correct retirement probability = 0). That is, the learning unit 23 learns the parameters of the N-months-ahead retirement prediction model 31N so that its output approaches 1 when a positive-example feature vector is input and approaches 0 when a negative-example feature vector is input.
The learning unit 23 similarly trains an (N+1)-months-ahead retirement prediction model 31N+1 that predicts the retirement probability N+1 months ahead, ..., and an (N+L)-months-ahead retirement prediction model 31N+L that predicts the retirement probability N+L months ahead. The learning unit 23 stores the prediction model 31, which is composed of the N-months-ahead retirement prediction model 31N, the (N+1)-months-ahead retirement prediction model 31N+1, ..., and the (N+L)-months-ahead retirement prediction model 31N+L, in a predetermined storage area of the retirement prediction processing device 10.
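A hedged sketch of this training step is shown below. The description does not prescribe a particular learning algorithm, so a LightGBM binary classifier (one option mentioned later in this description) is assumed; the dictionary layout of the per-horizon training data is likewise an assumption.

```python
import numpy as np
from lightgbm import LGBMClassifier  # any binary classifier with predict_proba would do

def train_horizon_models(features_by_horizon: dict[int, tuple[np.ndarray, np.ndarray]]) -> dict:
    """features_by_horizon maps a horizon (N, N+1, ..., N+L months) to
    (retiree feature matrix, incumbent feature matrix) built for that horizon."""
    models = {}
    for horizon, (x_retired, x_incumbent) in features_by_horizon.items():
        x = np.vstack([x_retired, x_incumbent])
        # Retirees are positive examples (label 1); incumbents are negative (label 0).
        y = np.concatenate([np.ones(len(x_retired)), np.zeros(len(x_incumbent))])
        model = LGBMClassifier(objective="binary")
        model.fit(x, y)
        models[horizon] = model
    return models
```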
Based on the prediction model 31, the learning unit 23 also generates an interpretation model 32 that calculates the degree of contribution of each feature quantity included in a feature vector to the prediction result of the prediction model 31. For example, the learning unit 23 generates the interpretation model 32 using SHAP (SHapley Additive exPlanations). The learning unit 23 stores the generated interpretation model 32 in a predetermined storage area of the retirement prediction processing device 10.
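The following sketch illustrates one way to realize the interpretation model 32 with the SHAP library, assuming a tree-based prediction model; the exact shape of the returned contribution values depends on the model type and the SHAP version.

```python
import numpy as np
import shap

def build_interpreter(model):
    """Wrap a trained tree-based prediction model with a SHAP explainer.
    TreeExplainer assumes a tree ensemble; another explainer could be used otherwise."""
    return shap.TreeExplainer(model)

def contributions(explainer, feature_vector: np.ndarray):
    """Return per-feature contribution values for one incumbent's feature vector
    (a 1-D NumPy array); the output layout depends on the model and SHAP version."""
    return explainer.shap_values(feature_vector.reshape(1, -1))
```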
 次に、退職予測部40について説明する。退職予測部40は、予測対象の現職者についての退職確率及び退職理由の予測時に機能する機能構成である。 Next, the retirement prediction unit 40 will be explained. The retirement prediction unit 40 is a functional configuration that functions when predicting the probability of retirement and the reason for retirement for an incumbent to be predicted.
 生成部41は、退職予測処理装置10に入力される予測対象の現職者の人事関連情報を取得する。予測対象の現職者の人事関連情報は、退職予測学習部20で説明した人事関連情報と同様である。生成部41は、退職予測学習部20の生成部22と同様に、予測対象の現職者の人事関連情報から特徴ベクトルを生成する。この際、生成部41は、最新月から遡った期間の人事関連情報を用いる。 The generation unit 41 acquires the personnel-related information of the current employee to be predicted, which is input into the retirement prediction processing device 10. The personnel-related information of the incumbent who is the prediction target is the same as the personnel-related information described in the retirement prediction learning section 20. Similar to the generation unit 22 of the retirement prediction learning unit 20, the generation unit 41 generates a feature vector from the personnel-related information of the incumbent who is the prediction target. At this time, the generation unit 41 uses personnel-related information for a period dating back from the latest month.
By inputting the feature vector of the prediction-target incumbent to the prediction model 31, the prediction unit 42 predicts the retirement probabilities of that incumbent at a plurality of future points in time. Specifically, as shown in the upper part of FIG. 8, the prediction unit 42 inputs the feature vector of the prediction-target incumbent to each of the N-months-ahead retirement prediction model 31N, the (N+1)-months-ahead retirement prediction model 31N+1, ..., and the (N+L)-months-ahead retirement prediction model 31N+L that constitute the prediction model 31. The prediction unit 42 thereby obtains the retirement probability N months ahead, N+1 months ahead, ..., and N+L months ahead.
Predicting retirement probabilities at a plurality of future points in time in this way makes it possible to take timely retirement-prevention measures. For example, based on the prediction results, it becomes possible to judge that an incumbent shows few signs of retiring in the short term but, if the current situation continues, the retirement probability will rise and signs of retiring will appear in the long term, so that early retirement-prevention measures are needed. As another example, if the retirement probability is trending upward three months ahead, retirement can be prevented by taking measures such as gradually reducing the workload over those three months.
As shown in the lower part of FIG. 8, the prediction unit 42 may also create a graph showing the time-series change of the retirement probability from the N-months-ahead probability through the (N+L)-months-ahead probability. FIG. 9 shows such a graph for N = 6 and L = 5 of the retirement probabilities predicted as of March, that is, from 6 months ahead (September) to 11 months ahead (February). The example of FIG. 9 also shows a threshold for judging whether the retirement probability is high or low. By outputting such a graph, it can be judged, for example, that the retirement probability rises sharply in January and that some retirement-prevention measure is needed before then. The prediction unit 42 passes the feature vector of the prediction-target incumbent and the retirement probabilities N months ahead, N+1 months ahead, ..., and N+L months ahead to the interpretation unit 43.
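A minimal sketch of multi-horizon prediction and of plotting the time series, assuming the per-horizon models trained above expose a scikit-learn-style predict_proba and that matplotlib is available; the threshold value is illustrative.

```python
import matplotlib.pyplot as plt
import numpy as np

def predict_time_series(models: dict, feature_vector: np.ndarray) -> dict:
    """Apply each horizon-specific model to one incumbent's feature vector
    and collect the predicted retirement probability per horizon."""
    x = feature_vector.reshape(1, -1)
    return {h: float(m.predict_proba(x)[0, 1]) for h, m in sorted(models.items())}

def plot_time_series(probs: dict, threshold: float = 0.5) -> None:
    """Plot the probabilities as a time series with a high/low threshold line."""
    horizons = list(probs.keys())
    plt.plot(horizons, [probs[h] for h in horizons], marker="o")
    plt.axhline(threshold, linestyle="--")
    plt.xlabel("months ahead")
    plt.ylabel("retirement probability")
    plt.show()
```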
The interpretation unit 43 uses the prediction model 31, the feature vector of the prediction-target incumbent, and the interpretation model 32 to calculate the degree of contribution of each feature quantity included in the feature vector to the prediction result. If the contributions of all feature quantities were presented to a person in charge, such as a human-resources staff member or the incumbent's manager, a large number of feature quantities would make them difficult to review. The interpretation unit 43 therefore presents, as the grounds on which the prediction model 31 produced the prediction result, the feature quantities whose contribution is greater than or equal to a predetermined value, or a predetermined number of feature quantities in descending order of contribution.
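A small sketch of selecting the top-K contributing feature quantities from the contribution values produced by the interpretation model; the feature-name list and the value of K are assumptions for illustration.

```python
import numpy as np

def top_k_contributions(feature_names, contribution_values, k=3):
    """Return the k features with the largest absolute contribution,
    paired with their contribution values, in descending order."""
    contribution_values = np.asarray(contribution_values).ravel()
    order = np.argsort(np.abs(contribution_values))[::-1][:k]
    return [(feature_names[i], float(contribution_values[i])) for i in order]
```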
 図10に、特徴量毎の貢献度の一例を示す。図10の例では、横軸が貢献度(0.0~1.0)であり、各特徴量についてかっこ書きで示した数値は、その特徴量の値である。担当者は、貢献度の高い特徴量の値を確認することで、退職確率が高い現職者について、退職予測理由を判断することができる。 FIG. 10 shows an example of the degree of contribution for each feature amount. In the example of FIG. 10, the horizontal axis is the degree of contribution (0.0 to 1.0), and the numerical value shown in parentheses for each feature is the value of that feature. By checking the values of the feature values with high contribution degrees, the person in charge can determine the reason for predicting retirement for incumbents who have a high probability of retiring.
For example, in the case shown in FIG. 10, the person in charge can see that the prediction-target incumbent has not taken special leave such as the year-end and New Year holidays or summer vacation, and that the total working hours one month earlier were relatively low, suggesting a tendency to take days off. The person in charge can also see that the incumbent worked a shift starting at 2 p.m., from which it can be inferred that the incumbent is not a morning person or has commitments in the morning. Furthermore, the person in charge can see that the incumbent was late twice three months earlier, with a very large total of 272 minutes of lateness, from which it can be inferred that the incumbent may be in poor physical condition. The person in charge can also see that the incumbent has been employed for a relatively long time of three years or more, and that the incumbent's total working hours fluctuate widely, with three late arrivals in the previous month as well, suggesting that absences and lateness are increasing. Considering all of these together, the person in charge can judge that the predicted reason for retirement of the prediction-target incumbent is "poor health."
Even the feature quantities with high contributions and their values alone can be difficult for a human to understand. Converting each combination of a feature quantity and its value into natural language for interpretation therefore makes the result easy for humans to interpret even if they are not familiar with machine learning. Specifically, the interpretation unit 43 refers to the correspondence table 33, which stores combinations of feature quantities and feature-quantity values in association with natural-language texts interpreting those combinations, and converts the highly contributing feature quantities, that is, the grounds for the prediction, into natural language.
 図11に、対応表33の一例を示す。図11の例では、特徴量名と、特徴量の値と、解釈用の自然言語と、退職予測理由とが対応付けて記憶されている。解釈部43は、解釈モデル32を用いて抽出した貢献度の高い特徴量と、その特徴量の値との組み合わせに対応する解釈用の自然言語及び退職予測理由を対応表33から取得する。 FIG. 11 shows an example of the correspondence table 33. In the example of FIG. 11, the feature name, the value of the feature, the natural language for interpretation, and the retirement prediction reason are stored in association with each other. The interpretation unit 43 acquires from the correspondence table 33 the natural language for interpretation and the retirement prediction reason corresponding to the combination of the highly contributing feature extracted using the interpretation model 32 and the value of the feature.
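The following sketch shows one possible in-memory form of such a correspondence table and its lookup, assuming entries keyed by a feature name and a value condition; the entries themselves are invented examples, and the real table of FIG. 11 would be maintained by the operator.

```python
# Illustrative correspondence entries (feature, value condition, interpretation text,
# predicted retirement reason); the contents below are examples only.
CORRESPONDENCE = [
    {"feature": "special_leave_days", "matches": lambda v: v == 0,
     "text": "has not taken any special leave", "reason": "poor health"},
    {"feature": "late_minutes_3m_sum", "matches": lambda v: v >= 120,
     "text": "accumulated a large amount of lateness three months ago", "reason": "poor health"},
]

def interpret(feature: str, value):
    """Look up the interpretation text and predicted retirement reason
    for one (feature, value) pair; returns (None, None) when no entry matches."""
    for entry in CORRESPONDENCE:
        if entry["feature"] == feature and entry["matches"](value):
            return entry["text"], entry["reason"]
    return None, None
```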
The interpretation unit 43 also extracts, from among the plurality of retirees, those retirees for whom the similarity between the contributions of the retiree's feature quantities and the contributions of the prediction-target incumbent's feature quantities is greater than or equal to a predetermined value, or a predetermined number of retirees in descending order of that similarity. Specifically, the interpretation unit 43 acquires the feature vectors of the plurality of retirees from the retiree DB 52, calculates the contribution of each feature quantity for each retiree using that retiree's feature vector and the interpretation model 32 as shown in FIG. 12, and compiles the calculation results into a list. FIG. 13 shows an example of such a list of contribution calculation results. The interpretation unit 43 creates, for the prediction-target incumbent and for each retiree, a contribution vector whose elements are the contributions of the individual feature quantities, and calculates the Euclidean distance or the like between the incumbent's contribution vector and each retiree's contribution vector as the similarity. The interpretation unit 43 then sorts the retirees in descending order of similarity to the incumbent's contribution vector, that is, in ascending order of the distance between contribution vectors, and extracts the retirees whose similarity is greater than or equal to a predetermined value or a predetermined number of retirees from the top.
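A minimal sketch of this similarity computation, assuming each retiree's contribution vector is available as a NumPy array keyed by personal code; Euclidean distance is used as in the description, and the ranking count is illustrative.

```python
import numpy as np

def most_similar_retirees(target_contrib: np.ndarray,
                          retiree_contribs: dict[str, np.ndarray],
                          top_n: int = 3) -> list[tuple[str, float]]:
    """Rank retirees by Euclidean distance between contribution vectors
    (smaller distance = more similar) and return the top_n closest."""
    distances = {code: float(np.linalg.norm(target_contrib - vec))
                 for code, vec in retiree_contribs.items()}
    return sorted(distances.items(), key=lambda kv: kv[1])[:top_n]
```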
The interpretation unit 43 acquires, for each extracted retiree, the reason for retirement stored in the retiree DB 52. The interpretation unit 43 may also identify, from a list such as that shown in FIG. 13, the feature quantity with the highest contribution for an extracted retiree and acquire from the correspondence table 33 the retirement reason corresponding to the combination of that feature quantity and its value. This is based on the idea that, for a retiree whose feature-quantity contributions are similar to those of the prediction-target incumbent, a feature quantity with a high contribution is likely to have influenced the reason for retirement. In the example of FIG. 13, the interpretation unit 43 acquires the retirement reason corresponding to feature quantity 1 of Mr. Y, the retiree whose feature-quantity contributions are most similar to those of the prediction-target incumbent.
The interpretation unit 43 creates a prediction result list describing the pieces of information interpreted as described above. FIG. 14 shows an example of the prediction result list, which lists the prediction results for incumbents whose retirement probability N months ahead, as predicted by the prediction unit 42, is greater than or equal to a predetermined value. The prediction result list of FIG. 14 contains, in association with the "personal code" of each such incumbent, the "retirement probability N months ahead," the "predicted retirement reason," the "top K prediction grounds," and "past retirees with similar tendencies."
The "retirement probability N months ahead" is the retirement probability N months ahead passed from the prediction unit 42. The "predicted retirement reason" is the retirement prediction reason obtained from the correspondence table 33 based on the combination of a highly contributing feature quantity and its value. The "top K prediction grounds" are the texts obtained by converting, with reference to the correspondence table 33, the combinations of the top K feature quantities in descending order of contribution (K = 3 in the example of FIG. 14) and their values into natural language. In the example of FIG. 14, the contribution of each of the K feature quantities, normalized so that the contributions sum to 1, is also shown as reference information. "Past retirees with similar tendencies" gives the personal code and the retirement reason of the retiree extracted based on the similarity of feature-quantity contributions; although the example of FIG. 14 lists only one retiree, information on a plurality of retirees may be listed. Compiling the prediction results and the interpreted information into such a list makes it possible to filter incumbents by retirement probability or retirement reason and to quickly find incumbents who need to be addressed with priority.
The interpretation unit 43 similarly creates prediction result lists for N+1 months ahead, ..., and N+L months ahead. The lists can thus be used flexibly: the prediction result list for N months ahead can be checked for incumbents who need to be addressed immediately, while the list for N+L months ahead can be checked for incumbents who need to be addressed over the longer term.
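A small sketch of assembling one row of such a prediction result list; the field names loosely follow FIG. 14 and are assumptions about the output layout rather than a fixed format.

```python
def build_result_row(person_code, prob, reason, top_features, similar_retirees):
    """Assemble one row of the prediction result list for a single incumbent."""
    return {
        "personal_code": person_code,
        "retirement_probability": round(prob, 2),
        "predicted_reason": reason,
        "top_prediction_grounds": top_features,      # e.g. output of top_k_contributions
        "similar_past_retirees": similar_retirees,   # e.g. output of most_similar_retirees
    }
```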
The interpretation unit 43 outputs the created prediction result list, and may also output the graph showing the time series of retirement probabilities described above. Presenting the grounds for the prediction in an easy-to-interpret form together with the retirement probabilities at a plurality of future points in time makes it possible to understand why the prediction-target incumbent may be considering retiring and to take appropriate retirement-prevention measures.
 次に、退職予測処理装置10の作用について説明する。図15は、退職予測処理装置10による学習処理の流れを示すフローチャートである。CPU11がROM12又はストレージ14から退職予測学習プログラムを読み出して、RAM13に展開して実行することにより、学習処理が行なわれる。なお、学習処理は、本発明の退職予測学習方法の一例である。 Next, the operation of the retirement prediction processing device 10 will be explained. FIG. 15 is a flowchart showing the flow of learning processing by the retirement prediction processing device 10. The learning process is performed by the CPU 11 reading out the retirement prediction learning program from the ROM 12 or the storage 14, loading it onto the RAM 13, and executing it. Note that the learning process is an example of the retirement prediction learning method of the present invention.
 ステップS11において、CPU11は、フィルタ部21として、現職者DB51から現職者の人事関連情報を取得すると共に、退職者DB52から退職者の人事関連情報を取得する。 In step S<b>11 , the CPU 11 , as the filter unit 21 , acquires the personnel-related information of the incumbent from the incumbent DB 51 and acquires the personnel-related information of the retiree from the retiree DB 52 .
Next, in step S12, the CPU 11, as the filter unit 21, extracts, for each retiree, the personnel-related information at or before the point that is the first period before the time of retirement, and, for each incumbent, the personnel-related information at or before the point that is the first period plus a further second period before the present time.
Next, in step S13, the CPU 11, as the generation unit 22, applies various processing, such as statistical processing and conversion into categorical variables, to each item of the personnel-related information of the incumbents and retirees extracted in step S12, and generates feature vectors each containing a plurality of feature quantities.
Next, in step S14, the CPU 11, as the learning unit 23, trains the prediction model 31 using the retirees' feature vectors generated in step S13 as positive examples and the incumbents' feature vectors as negative examples. Next, in step S15, the CPU 11, as the learning unit 23, generates, based on the trained prediction model 31, the interpretation model 32 that calculates the contribution of each feature quantity included in a feature vector to the prediction result of the prediction model 31, and the learning process ends.
 図16は、退職予測処理装置10による予測処理の流れを示すフローチャートである。CPU11がROM12又はストレージ14から退職予測プログラムを読み出して、RAM13に展開して実行することにより、予測処理が行なわれる。なお、予測処理は、本発明の退職予測方法の一例である。 FIG. 16 is a flowchart showing the flow of prediction processing by the retirement prediction processing device 10. The prediction process is performed by the CPU 11 reading out the retirement prediction program from the ROM 12 or the storage 14, loading it into the RAM 13, and executing it. Note that the prediction process is an example of the retirement prediction method of the present invention.
 ステップS21で、CPU11は、生成部41として、退職予測処理装置10に入力された予測対象の現職者の人事関連情報を取得する。次に、ステップS22で、CPU11は、生成部41として、予測対象の現職者の人事関連情報から特徴ベクトルを生成する。 In step S21, the CPU 11, as the generation unit 41, acquires the personnel-related information of the current employee to be predicted, which is input to the retirement prediction processing device 10. Next, in step S22, the CPU 11, as the generation unit 41, generates a feature vector from the personnel-related information of the incumbent who is the prediction target.
 次に、ステップS23で、CPU11は、予測部42として、予測モデル31に予測対象の現職者の特徴ベクトルを入力することで、予測対象の現職者の将来の複数の時点における退職確率を予測する。 Next, in step S23, the CPU 11, as the prediction unit 42, predicts the retirement probability of the incumbent to be predicted at multiple points in the future by inputting the feature vector of the incumbent to be predicted to the prediction model 31. .
Next, in step S24, the CPU 11, as the interpretation unit 43, uses the prediction model 31, the feature vectors of the prediction-target incumbent and of each retiree, and the interpretation model 32 to calculate the contribution of each feature quantity included in each feature vector to the prediction result. Next, in step S25, the CPU 11, as the interpretation unit 43, refers to the correspondence table 33 and acquires the natural language for interpretation and the retirement prediction reason corresponding to the combinations of the highly contributing feature quantities of the prediction-target incumbent and their values.
Next, in step S26, the CPU 11, as the interpretation unit 43, extracts, from among the plurality of retirees, the retirees for whom the similarity between the contributions of the retiree's feature quantities and the contributions of the prediction-target incumbent's feature quantities is greater than or equal to a predetermined value, or a predetermined number of retirees in descending order of similarity. Next, in step S27, the CPU 11, as the interpretation unit 43, acquires the retirement reason stored in the retiree DB 52 for each extracted retiree.
Next, in step S28, the CPU 11, as the interpretation unit 43, creates, for each incumbent whose retirement probability after the first period predicted in step S23 is greater than or equal to a predetermined value, a prediction result list that includes that retirement probability. The CPU 11, as the interpretation unit 43, also includes in the prediction result list the natural language indicating the prediction grounds and the retirement prediction reason acquired in step S25, the personal codes of the retirees extracted in step S26, and the retirement reasons of those retirees acquired in step S27. The CPU 11, as the interpretation unit 43, outputs the created prediction result list, and the prediction process ends.
As described above, the retirement prediction unit of the retirement prediction processing device according to the present embodiment generates feature vectors each including a plurality of feature quantities from the personnel-related information for a predetermined period of each incumbent and retiree. It then predicts the retirement probabilities of a prediction-target incumbent at a plurality of future points in time by using the feature vector of that incumbent together with a prediction model that is trained with the feature vectors generated for the plurality of retirees as positive examples and the feature vectors generated for the plurality of incumbents as negative examples, and that, when a feature vector for an incumbent is input, predicts retirement probabilities indicating the likelihood that the incumbent will retire at a plurality of future points in time. This makes it possible to perform retirement prediction that supports taking retirement-prevention measures at an appropriate time.
Furthermore, with the retirement prediction unit of the retirement prediction processing device according to the present embodiment, as shown in FIG. 17A, some employees who were incumbents at the time of learning may be predicted at prediction time to have a possibility of retiring N months later. In addition, as shown in FIG. 17B, retirement probabilities and retirement reasons are also predicted for incumbents who were not yet employed at the time of learning.
 また、退職予測部は、将来の複数の時点における退職確率を、退職確率の時系列変化を示すグラフとして出力してもよい。これにより、退職防止策を打つタイミングを適切に判断することができる。 Additionally, the retirement prediction unit may output the retirement probabilities at multiple points in the future as a graph showing time-series changes in the retirement probabilities. This makes it possible to appropriately judge when to take measures to prevent retirement.
The retirement prediction unit may also use the prediction model, the feature vector of the prediction-target incumbent, and an interpretation model that calculates the contribution of each feature quantity to the prediction result of the prediction model, and present, as the grounds on which the prediction model produced the prediction result, the feature quantities whose contribution is greater than or equal to a predetermined value or a predetermined number of top feature quantities in descending order of contribution. This makes it possible to understand the grounds for the predicted retirement probability and to consider appropriate retirement-prevention measures.
The retirement prediction unit may also extract and present, from among the plurality of retirees, retirees for whom the similarity between the retiree's feature vector and the prediction-target incumbent's feature vector is greater than or equal to a predetermined value or who rank within a predetermined number in descending order of that similarity, or retirees for whom the similarity between the contributions of the retiree's feature quantities and the contributions of the prediction-target incumbent's feature quantities is greater than or equal to a predetermined value or who rank within a predetermined number in descending order of that similarity. This makes it possible to consider appropriate retirement-prevention measures by referring to information on retirees with tendencies similar to those of the prediction-target incumbent. In particular, when the similarity of feature-quantity contributions is used, the feature quantities that led to retirement can be assumed to be similar, so the retirement reason can be estimated appropriately and more suitable retirement-prevention measures can be considered.
 また、退職予測部は、抽出した退職者について予め記憶されている退職理由を提示してもよい。これにより、退職理由を適切に想定して、より適切な退職防止策を検討することができる。 Additionally, the retirement prediction unit may present the reason for retirement stored in advance for the extracted retiree. This makes it possible to appropriately assume the reasons for retirement and consider more appropriate retirement prevention measures.
The retirement prediction unit may also refer to a correspondence table that stores combinations of a feature quantity and a reference value of that feature quantity in association with natural-language texts interpreting those combinations, and convert the feature quantities to be presented into natural language before presenting them. As a result, even persons in charge, such as human-resources staff or managers, who are not familiar with machine learning can easily understand the grounds of the predictions made by the prediction model.
The retirement prediction unit may also refer to a correspondence table in which the combinations of a feature quantity and a reference value of that feature quantity are further associated with the retirement prediction reasons those combinations represent, and present the retirement prediction reason corresponding to each presented feature quantity. This makes it easy to understand the predicted retirement reason according to the feature quantities underlying the prediction.
For personnel-related information containing a plurality of items, the retirement prediction unit may generate the feature vector by using, for items whose values do not change during the predetermined period or change only irregularly, the item values as they are or converted into categorical variables, and by statistically processing the values for the most recent predetermined period for items whose values are numerical and change regularly. As a result, static items such as qualifications and departments yield feature quantities from which past experience and the like can be learned, while statistical items such as attendance and salary yield feature quantities that allow learning that takes the most recent situation into account.
The retirement prediction learning unit of the retirement prediction processing device according to the present embodiment generates feature vectors each including a plurality of feature quantities from the personnel-related information for a predetermined period of each incumbent and retiree, and, using the feature vectors generated for the plurality of retirees as positive examples and the feature vectors generated for the plurality of incumbents as negative examples, learns a prediction model that, when a feature vector for an incumbent is input, predicts retirement probabilities indicating the likelihood that the incumbent will retire at a plurality of future points in time. This makes it possible to generate a prediction model capable of retirement prediction that supports taking retirement-prevention measures at an appropriate time.
As the personnel-related information used to generate the feature quantities for training the prediction model that predicts the retirement probability a first period after the present time, the retirement prediction learning unit may extract, for a retiree, the personnel-related information at or before the point that is the first period before the time of retirement, and, for an incumbent, the personnel-related information at or before the point that is the first period plus a further second period before the present time, with a plurality of periods being set as the second period, and may generate the feature vectors using the extracted personnel-related information as the personnel-related information for the predetermined period. This suppresses the generation of identical feature quantities based on item values common to all incumbents, which would otherwise become leak features.
 ここで、図18に、本実施形態における予測モデルの予測精度の検証結果の一例を示す。図18では、6ヶ月後の退職確率を予測する予測モデルについて、現職者の雇用区分毎の再現率及び適合率を検証結果として示している。なお、図18のA~Hは、アルバイトや正社員等の雇用区分を表すデータをマスクしたものである。雇用区分毎に差はあるが、特定の雇用区分では高い再現率が得られている。最も再現率が低い雇用区分Gにおいても、およそ4人に1人は退職兆候がある現職者を見つけられることが分かる。 Here, FIG. 18 shows an example of the verification results of the prediction accuracy of the prediction model in this embodiment. FIG. 18 shows the recall rate and precision rate for each employment category of incumbents as verification results for a prediction model that predicts the probability of retirement six months later. Note that A to H in FIG. 18 mask data representing employment categories such as part-time workers and full-time employees. Although there are differences by employment category, high recall rates are obtained for specific employment categories. It can be seen that even in employment category G, which has the lowest recall rate, approximately one in four incumbents can be found showing signs of retiring.
<Modified example>
 In the above embodiment, the case where N is an integer greater than 0 was described for predicting the retirement probability N months ahead; however, an arbitrary value such as N = 1.5 may also be set.
In the above embodiment, the case was described in which the similarity of feature-quantity contributions is used when extracting retirees similar to the prediction-target incumbent; however, as shown in FIG. 19, the similarity of the feature vectors themselves may be used instead.
 Furthermore, in the above embodiment, a case was described in which the prediction model and the interpretation model are provided separately; however, a machine learning model capable of calculating feature importance, such as LightGBM, may be used as the prediction model.
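 When a model such as LightGBM is used as the prediction model, its built-in feature importance can stand in for a separate interpretation model. The sketch below assumes the scikit-learn style LGBMClassifier interface and caller-supplied feature names; it is an illustration, not the embodiment's required implementation.

```python
# Sketch only: read feature importance directly from the prediction model.
import lightgbm as lgb
import pandas as pd

def top_important_features(model: lgb.LGBMClassifier, feature_names, top_n=10):
    importance = pd.Series(model.feature_importances_, index=list(feature_names))
    return importance.sort_values(ascending=False).head(top_n)
```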
 Furthermore, in the above embodiment, a case was described in which the retirement probability is output together with information interpreting the basis of the prediction and the prediction result, such as the reason for the predicted retirement; however, only the retirement probability may be output.
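 One way to attach such prediction reasons to the output is to rank per-incumbent feature contributions. The sketch below uses SHAP purely as an illustrative choice of interpretation model; the embodiment does not prescribe a specific library, and the handling of the return shape is an assumption.

```python
# Sketch only: present the top contributing features as the reason for a prediction.
import shap

def explain_prediction(model, incumbent_vector, feature_names, top_n=3):
    """incumbent_vector: 1-D NumPy feature vector of the prediction-target incumbent."""
    explainer = shap.TreeExplainer(model)  # suitable for tree-based models
    contributions = explainer.shap_values(incumbent_vector.reshape(1, -1))
    # Depending on the SHAP/LightGBM versions, the result is a list per class
    # or a single array; take the contributions for the positive (retirement) class.
    values = contributions[1][0] if isinstance(contributions, list) else contributions[0]
    ranked = sorted(zip(feature_names, values), key=lambda t: abs(t[1]), reverse=True)
    return ranked[:top_n]
```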
 Furthermore, in the above embodiment, a case was described in which the retirement prediction learning unit and the retirement prediction unit are realized by a single computer (the retirement prediction processing device); however, they may be realized by separate computers.
 Furthermore, the retirement prediction learning process and the retirement prediction process, which the CPU executes by reading software (a program) in the above embodiment, may be executed by various processors other than the CPU. Examples of such processors include a PLD (Programmable Logic Device) whose circuit configuration can be changed after manufacture, such as an FPGA (Field-Programmable Gate Array), and a dedicated electric circuit, which is a processor having a circuit configuration designed exclusively for executing specific processing, such as an ASIC (Application Specific Integrated Circuit). The retirement prediction learning process and the retirement prediction process may be executed by one of these various processors, or by a combination of two or more processors of the same or different types (for example, a plurality of FPGAs, or a combination of a CPU and an FPGA). More specifically, the hardware structure of these various processors is an electric circuit in which circuit elements such as semiconductor elements are combined.
 Furthermore, in the above embodiment, a mode was described in which the retirement prediction learning program and the retirement prediction program are stored (installed) in advance in the ROM 12 or the storage 14, but the present disclosure is not limited to this. The programs may be provided in a form stored on a non-transitory storage medium such as a CD-ROM (Compact Disk Read Only Memory), a DVD-ROM (Digital Versatile Disk Read Only Memory), or a USB (Universal Serial Bus) memory. The programs may also be downloaded from an external device via a network.
 Regarding the above embodiments, the following additional notes are further disclosed.
 (Additional note 1)
 A retirement prediction device comprising:
 a memory; and
 at least one processor connected to the memory,
 wherein the processor is configured to:
 generate a feature vector including a plurality of feature quantities from personnel-related information for a predetermined period for each of incumbents and retirees; and
 predict retirement probabilities of a prediction-target incumbent at a plurality of future points in time by using a feature vector for the prediction-target incumbent and a prediction model trained with the feature vectors generated for each of a plurality of retirees as positive examples and the feature vectors generated for each of a plurality of incumbents as negative examples, the prediction model predicting, when a feature vector for an incumbent is input, retirement probabilities indicating the possibility that the incumbent will retire at the plurality of future points in time.
 (Additional note 2)
 A retirement prediction learning device comprising:
 a memory; and
 at least one processor connected to the memory,
 wherein the processor is configured to:
 generate a feature vector including a plurality of feature quantities from personnel-related information for a predetermined period for each of incumbents and retirees; and
 learn a prediction model that, when a feature vector for an incumbent is input, predicts retirement probabilities indicating the possibility that the incumbent will retire at a plurality of future points in time, by using the feature vectors generated for each of a plurality of retirees as positive examples and the feature vectors generated for each of a plurality of incumbents as negative examples.
 (Additional note 3)
 A non-transitory recording medium storing a program executable by a computer to execute a retirement prediction process, the retirement prediction process comprising:
 generating a feature vector including a plurality of feature quantities from personnel-related information for a predetermined period for each of incumbents and retirees; and
 predicting retirement probabilities of a prediction-target incumbent at a plurality of future points in time by using a feature vector for the prediction-target incumbent and a prediction model trained with the feature vectors generated for each of a plurality of retirees as positive examples and the feature vectors generated for each of a plurality of incumbents as negative examples, the prediction model predicting, when a feature vector for an incumbent is input, retirement probabilities indicating the possibility that the incumbent will retire at the plurality of future points in time.
 (Additional note 4)
 A non-transitory recording medium storing a program executable by a computer to execute a retirement prediction learning process, the retirement prediction learning process comprising:
 generating a feature vector including a plurality of feature quantities from personnel-related information for a predetermined period for each of incumbents and retirees; and
 learning a prediction model that, when a feature vector for an incumbent is input, predicts retirement probabilities indicating the possibility that the incumbent will retire at a plurality of future points in time, by using the feature vectors generated for each of a plurality of retirees as positive examples and the feature vectors generated for each of a plurality of incumbents as negative examples.
10   Retirement prediction processing device
11   CPU
12   ROM
13   RAM
14   Storage
15   Input unit
16   Display unit
17   Communication I/F
19   Bus
20   Retirement prediction learning unit
21   Filter unit
22   Generation unit
23   Learning unit
31   Prediction model
32   Interpretation model
33   Correspondence table
40   Retirement prediction unit
41   Generation unit
42   Prediction unit
43   Interpretation unit
51   Incumbent DB
52   Retiree DB

Claims (8)

  1.  A retirement prediction device comprising:
     a generation unit that generates a feature vector including a plurality of feature quantities from personnel-related information for a predetermined period for each of incumbents and retirees; and
     a prediction unit that predicts retirement probabilities of a prediction-target incumbent at a plurality of future points in time by using a feature vector for the prediction-target incumbent and a prediction model trained with the feature vectors generated for each of a plurality of retirees as positive examples and the feature vectors generated for each of a plurality of incumbents as negative examples, the prediction model predicting, when a feature vector for an incumbent is input, retirement probabilities indicating the possibility that the incumbent will retire at the plurality of future points in time.
  2.  The retirement prediction device according to claim 1, further comprising an interpretation unit that, by using the prediction model, the feature vector of the prediction-target incumbent, and an interpretation model that calculates a degree of contribution of each feature quantity to a prediction result of the prediction model, presents, as the basis on which the prediction model predicted the prediction result, feature quantities whose degree of contribution is equal to or greater than a predetermined value, or a predetermined number of top feature quantities in descending order of the degree of contribution.
  3.  The retirement prediction device according to claim 1, further comprising an interpretation unit that, by using the prediction model, the feature vector of each of the plurality of retirees, the feature vector of the prediction-target incumbent, and an interpretation model that calculates a degree of contribution of each feature quantity to a prediction result of the prediction model, extracts and presents, from among the plurality of retirees, retirees for whom the similarity between the degrees of contribution of the feature quantities of the retiree and the degrees of contribution of the feature quantities of the prediction-target incumbent is equal to or greater than a predetermined value, or a predetermined number of top retirees in descending order of the similarity.
  4.  A retirement prediction learning device comprising:
     a generation unit that generates a feature vector including a plurality of feature quantities from personnel-related information for a predetermined period for each of incumbents and retirees; and
     a learning unit that learns a prediction model that, when a feature vector for an incumbent is input, predicts retirement probabilities indicating the possibility that the incumbent will retire at a plurality of future points in time, by using the feature vectors generated for each of a plurality of retirees as positive examples and the feature vectors generated for each of a plurality of incumbents as negative examples.
  5.  The retirement prediction learning device according to claim 4, further comprising a filter unit that, as the personnel-related information used to generate the feature quantities for learning a prediction model that predicts the retirement probability a first period after the present time, extracts, for the retiree, personnel-related information dated at or before the point in time that is the first period before the retirement date, extracts, for the incumbent, personnel-related information dated at or before the point in time that is a further second period before the point that is the first period before the present time, and sets a plurality of periods as the second period,
     wherein the generation unit generates the feature vectors by using the personnel-related information extracted by the filter unit as the personnel-related information for the predetermined period.
  6.  A retirement prediction method comprising:
     generating, by a generation unit, a feature vector including a plurality of feature quantities from personnel-related information for a predetermined period for each of incumbents and retirees; and
     predicting, by a prediction unit, retirement probabilities of a prediction-target incumbent at a plurality of future points in time by using a feature vector for the prediction-target incumbent and a prediction model trained with the feature vectors generated for each of a plurality of retirees as positive examples and the feature vectors generated for each of a plurality of incumbents as negative examples, the prediction model predicting, when a feature vector for an incumbent is input, retirement probabilities indicating the possibility that the incumbent will retire at the plurality of future points in time.
  7.  A retirement prediction learning method comprising:
     generating, by a generation unit, a feature vector including a plurality of feature quantities from personnel-related information for a predetermined period for each of incumbents and retirees; and
     learning, by a learning unit, a prediction model that, when a feature vector for an incumbent is input, predicts retirement probabilities indicating the possibility that the incumbent will retire at a plurality of future points in time, by using the feature vectors generated for each of a plurality of retirees as positive examples and the feature vectors generated for each of a plurality of incumbents as negative examples.
  8.  A program for causing a computer to function as each unit constituting the retirement prediction device according to any one of claims 1 to 3, or as each unit constituting the retirement prediction learning device according to claim 4 or claim 5.
PCT/JP2022/029396 2022-07-29 2022-07-29 Employee resignation prediction device, employee resignation prediction learning device, method, and program WO2024024116A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2022/029396 WO2024024116A1 (en) 2022-07-29 2022-07-29 Employee resignation prediction device, employee resignation prediction learning device, method, and program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2022/029396 WO2024024116A1 (en) 2022-07-29 2022-07-29 Employee resignation prediction device, employee resignation prediction learning device, method, and program

Publications (1)

Publication Number Publication Date
WO2024024116A1 true WO2024024116A1 (en) 2024-02-01

Family

ID=89705947

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/029396 WO2024024116A1 (en) 2022-07-29 2022-07-29 Employee resignation prediction device, employee resignation prediction learning device, method, and program

Country Status (1)

Country Link
WO (1) WO2024024116A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016035336A1 (en) * 2014-09-03 2016-03-10 日本電気株式会社 Leave of absence prediction system, prediction rule learning device, prediction device, leave of absence prediction method, and computer-readable recording medium
JP2020064343A (en) * 2018-10-15 2020-04-23 株式会社ニッセイコム Job turnover factor presentation apparatus, job turnover factor presentation method and job turnover factor presentation program
JP2021140506A (en) * 2020-03-05 2021-09-16 富士通株式会社 Determination program, determination device, and generation method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016035336A1 (en) * 2014-09-03 2016-03-10 日本電気株式会社 Leave of absence prediction system, prediction rule learning device, prediction device, leave of absence prediction method, and computer-readable recording medium
JP2020064343A (en) * 2018-10-15 2020-04-23 株式会社ニッセイコム Job turnover factor presentation apparatus, job turnover factor presentation method and job turnover factor presentation program
JP2021140506A (en) * 2020-03-05 2021-09-16 富士通株式会社 Determination program, determination device, and generation method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ANONYMOUS: "Retirement prediction AI service/NTT HumanEX", NTT EXCPARTNER CORPORATION, 25 May 2022 (2022-05-25), XP093133057, Retrieved from the Internet <URL:https://valuepartner.ntt-ba.co.jp/solution/engagement/> [retrieved on 20240220] *

Similar Documents

Publication Publication Date Title
McGee How the perception of control influences unemployed job search
Bliese et al. Understanding relative and absolute change in discontinuous growth models: Coding alternatives and implications for hypothesis testing
Schneider et al. Socioeconomic variation in the effect of economic conditions on marriage and nonmarital fertility in the United States: Evidence from the Great Recession
Hurd et al. Subjective survival curves and life cycle behavior
Burkhauser et al. Dynamic programming model estimates of Social Security Disability Insurance application timing
Hwang et al. The conditional relationship between English language proficiency and earnings among US immigrants
JP2020064343A (en) Job turnover factor presentation apparatus, job turnover factor presentation method and job turnover factor presentation program
Perales Modeling the consequences of the transition to parenthood: Applications of panel regression methods
Jafari et al. Predictive analytics approach to evaluate wage inequality in engineering organizations
Schwartz et al. Enhancing the accuracy of revenue management system forecasts: The impact of machine and human learning on the effectiveness of hotel occupancy forecast combinations across multiple forecasting horizons
JP2020035309A (en) Information processor and program
Hook et al. Occupational characteristics and parents' childcare time
Lee et al. Are the self-employed more stressed? New evidence on an old question
Daniel et al. The Impact Of Organizational Culture And Job Performance To Organizational Commitment And Employees Job Performance
Uhrich et al. Smarts or trait emotional intelligence? The role of trait emotional intelligence in enhancing the relationship between cognitive ability and performance.
Van der Baan et al. Employability competences of workers in health care and finance. The role of self‐directed learning orientation and job characteristics
Werner et al. Can a machine learn through customer sentiment?: A cost-aware approach to predict support ticket escalations
Lessem et al. Immigrant Wage Growth in the United States: The Role of Occupational Upgrading
WO2024024116A1 (en) Employee resignation prediction device, employee resignation prediction learning device, method, and program
Siddique et al. Critical Appraisal for Racial and Ethnic Equity in Clinical Prediction Models Extension: Development of a Critical Appraisal Tool Extension to Assess Racial and Ethnic Equity-Related Risk of Bias for Clinical Prediction Models
Brenner et al. Is there a life cycle in all industries? First evidence from industry size dynamics in West Germany
Coffey Bayesian methods for prediction of survey data collection parameters in adaptive and responsive designs
Madsen et al. Education–occupation mismatch and long-term sickness absence: a longitudinal study of over-and under-education using Norwegian population data, 2003–2013
Lachowska Outside options and wages: What can we learn from subjective assessments?
Alexander et al. Knowledge integration for predicting schedule delays in software integration

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22953204

Country of ref document: EP

Kind code of ref document: A1