EP3859588A2 - Method, apparatus, device and medium for generating recruitment position description text - Google Patents

Method, apparatus, device and medium for generating recruitment position description text

Info

Publication number
EP3859588A2
Authority
EP
European Patent Office
Prior art keywords
sub
module
text
subject
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP21165688.9A
Other languages
German (de)
French (fr)
Other versions
EP3859588A3 (en)
Inventor
Chuan Qin
Kaichun Yao
Hengshu Zhu
Chao Ma
Dazhong Shen
Tong Xu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Baidu Online Network Technology Beijing Co Ltd
Original Assignee
Baidu Online Network Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Baidu Online Network Technology Beijing Co Ltd
Publication of EP3859588A2
Publication of EP3859588A3

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/10Office automation; Time management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/40Processing or translation of natural language
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/10Text processing
    • G06F40/166Editing, e.g. inserting or deleting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/279Recognition of textual entities
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/40Processing or translation of natural language
    • G06F40/55Rule-based translation
    • G06F40/56Natural language generation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0631Resource planning, allocation, distributing or scheduling for enterprises or organisations
    • G06Q10/06311Scheduling, planning or task assignment for a person or group
    • G06Q10/063112Skill-based matching of a person or a group to a task
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/10Office automation; Time management
    • G06Q10/105Human resources
    • G06Q10/1053Employment or hiring
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/047Probabilistic or stochastic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions

Definitions

  • Embodiments of the present disclosure relate to computer technologies, in particular to the technical field of artificial intelligence, and more particularly to a method, apparatus, device, and medium for generating a recruitment position description text.
  • A recruitment position description sets out the responsibilities and skill requirements of a position. An effective position description helps the employer find the right person for the position and gives candidates a clear understanding of the responsibilities and qualifications of the particular position.
  • Embodiments of the present disclosure provide a method, apparatus, device and medium for generating a recruitment position description text, to accurately describe a recruitment position and improve the efficiency of generating a position description.
  • a method of generating a recruitment position description text including:
  • an apparatus for generating a recruitment position description text including:
  • an electronic device including:
  • a non-transitory computer-readable storage medium having stored thereon computer instructions for causing a computer to perform a method of generating a recruitment position description text as described in any of the embodiments of the present disclosure.
  • a target recruitment position description text may be generated automatically and quickly, and the generated text matches the requirements of the target position, thereby improving the generation efficiency and accuracy of the recruitment position description text, reducing the human resources and time consumed by the recruitment process, and improving recruitment efficiency.
  • FIG. 1 is an illustrative flow chart of a method for generating a recruitment position description text according to an embodiment of the present disclosure, which is used for automatically generating a recruitment position description text.
  • the method may be performed by a device for generating a recruitment position description text, which may be implemented in software and/or hardware and may be integrated into an electronic device with computing capability.
  • a method for generating the recruitment position description text according to the present embodiment may include following steps.
  • S110 includes obtaining an original text related to a target position.
  • the original text related to the target position, collected in advance by staff, is obtained.
  • the original text includes at least one of a resume text of a person who has been determined to meet the position requirement, a text containing position responsibility data, and a text containing project data related to the position.
  • the resume text of a person who has been determined to meet the position requirements may include the resume text of a person who has already enrolled and the resume text of a person who has passed the review and is to be enrolled.
  • staff collect in advance the resume texts of enrolled persons and of persons who have passed review and are about to be enrolled, collect responsibility data of different positions as the text containing position responsibility data, and collect project or engineering data related to different positions as the text containing project data related to the position.
  • for example, the content written in the resume of an employee may be that the employee's professional research direction is intelligent robots, and the content written in the text containing project data related to the position may be that the target position project is an intelligent robot obstacle avoidance project.
  • S120 includes generating a target recruitment position description text corresponding to the target position based on the original text and a pre-trained deep neural network model.
  • the deep neural network model is a model pre-trained to generate target recruitment position description text.
  • the target recruitment position description text includes descriptions of the duties, skills, and the like of the target position, to be presented to job seekers.
  • the original text related to the target position is input to the pre-trained deep neural network model, and the related data of the target position is extracted from the original text related to the target position by the deep neural network model.
  • data such as the current position, research direction, and current project of an enrolled person may be extracted from the resume texts of enrolled persons;
  • data such as the position willingness and research direction of a to-be-enrolled person may be extracted from the resume texts of persons who have passed review and will be enrolled;
  • data such as the main responsibilities, work tasks, and professional requirements of the position may be extracted from the text containing position responsibility data; and
  • data such as historical and current projects of the position may be extracted from the text containing project data related to the position.
  • after obtaining the original text related to the target position, the deep neural network model generates the target recruitment position description text corresponding to the target position based on the extracted data.
  • One embodiment of the above-mentioned application has the advantage or beneficial effect that the target recruitment position description text may be generated automatically and quickly by the deep neural network, and the generated text matches the needs of the target position, thereby improving the generation efficiency and accuracy of the recruitment position description text, reducing the human resources and time consumed by the recruitment process, and improving recruitment efficiency.
  • FIG. 2 is an illustrative structural diagram of a deep neural network model according to an embodiment of the present disclosure, which may execute the method discussed above.
  • the present embodiment provides a deep neural network model 200 that may include: a text subject predicting sub-model 210 for predicting a target skill subject distribution vector based on the original text related to the target position, and a description text generating sub-model 220 for generating the target recruitment position description text of the target position according to the target skill subject distribution vector.
  • the target skill subject distribution vector is a skill subject distribution vector of the target position, and the skill subject refers to a category name of a job skill required by the position.
  • the skill subject may include a coding type of skill subject, a machine learning type of skill subject, a big data type of skill subject, and the like.
  • the text subject predicting sub-model 210 obtains the original text related to the target position, and extracts the skill subject data of the target position from the original text related to the target position. For example, the project name of the enrolled person at the target position may be extracted, and the skill subject of the project may be obtained based on the project name.
  • the target skill subject distribution vector may be predicted based on the related data of the target position, thereby determining the skill subject of the target position.
  • the target skill subject distribution vector is transmitted to the description text generating sub-model 220 by the text subject predicting sub-model 210, and the description text generating sub-model 220 generates the target recruitment position description text according to the target skill subject distribution vector, to facilitate text description for the target position.
  • the target position is a software engineer
  • the target skill subject distribution vector of the position is a coding type of skill subject
  • the target recruitment position description text finally generated may be "Software engineer: requires proficient use of Java and C++, and more than three years of working experience."
  • One embodiment of the above-mentioned application has the advantage or beneficial effect that dividing the deep neural network model into a text subject predicting sub-model and a description text generating sub-model separates the determination of the skill subject from the generation of the description text of the target position, reducing manual operation and saving human resources and time. The description text is obtained according to the skill subject, which improves the accuracy and efficiency of the target recruitment position description text.
  • FIG. 3 is an illustrative structural diagram of a deep neural network model according to an embodiment of the present disclosure, which is further optimized on the basis of the above-described embodiment.
  • the deep neural network model 300 provided by the present embodiment may include: a text subject predicting sub-model 310 and a description text generating sub-model 320.
  • the text subject predicting sub-model 310 includes: a bag-of-word feature extracting module for extracting a bag-of-word feature vector of the original text related to the target position; a distribution parameter calculating module for calculating a skill subject vector distribution parameter according to the bag-of-word feature vector and a non-linear network layer; and a first subject distribution determining module for obtaining a target skill subject distribution vector according to the skill subject vector distribution parameter and a pre-set subject distribution hypothesis parameter.
  • the bag-of-word feature extracting module 301 extracts a bag-of-word feature vector from the original text related to the target position, after obtaining the original text related to the target position.
  • for example, if the original text related to the target position is "a software engineer needs a programming foundation" and "a software engineer needs to be serious and down-to-earth",
  • the bag-of-word feature vectors may be expressed as [111100] and [110011], respectively; a minimal construction is sketched below.
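  • As a minimal sketch of how such binary bag-of-word vectors could be built (the six-entry vocabulary and the tokenization below are illustrative, not taken from the disclosure):

```python
from collections import Counter

# Illustrative six-entry vocabulary matching the example sentences above;
# tokenizing "software engineer" as one unit is assumed for this sketch.
VOCAB = ["software engineer", "needs", "programming", "foundation",
         "serious", "down-to-earth"]

def bow_vector(tokens):
    """Binary bag-of-word indicator over VOCAB, as in [111100]."""
    counts = Counter(tokens)
    return [1 if counts[w] > 0 else 0 for w in VOCAB]

s1 = ["software engineer", "needs", "programming", "foundation"]
s2 = ["software engineer", "needs", "serious", "down-to-earth"]
print(bow_vector(s1))  # [1, 1, 1, 1, 0, 0]
print(bow_vector(s2))  # [1, 1, 0, 0, 1, 1]
```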
  • the bag-of-word feature extracting module 301 sends the bag-of-word feature vector to the distribution parameter calculating module 302, and the distribution parameter calculating module 302 calculates the skill subject vector distribution parameter according to the bag-of-word feature vector and the pre-set non-linear network layer.
  • the distribution parameter calculating module 302 sends the skill subject vector distribution parameter to the first subject distribution determining module 303, and the first subject distribution determining module 303 calculates the target skill subject distribution vector according to the skill subject vector distribution parameter and the pre-set subject distribution hypothesis parameter.
  • in this way, a well-organized calculation of the target skill subject distribution vector is realized, the calculation accuracy is improved, manual operation is reduced, the process of manually determining the skill subject is avoided, and the calculation efficiency of the target skill subject distribution vector is improved.
  • the bag-of-word feature extracting module includes a bag-of-word generating sub-module for generating bag-of-word characterization data of the original text related to the target position, and a first fully connected network sub-module for performing feature extraction on the bag-of-word characterization data to obtain a bag-of-word feature vector.
  • the bag-of-word feature extracting module 301 may include a bag-of-word generating sub-module 3011 and a first fully connected network submodule 3012.
  • the first fully connected network sub-module 3012 may include one or more layers of fully connected networks.
  • the bag-of-word generating sub-module 3011 extracts the bag-of-word characterization data from the original text related to the target position. For example, if the original text is "a software engineer needs a programming foundation" and "a software engineer needs to be serious and down-to-earth", the extracted bag-of-word characterization data is "software engineer, needs, programming, foundation, serious, down-to-earth", which is represented as $X_i^{bow}$.
  • the bag-of-word generating sub-module 3011 sends the bag-of-word characterization data to the first fully connected network sub-module 3012, which may perform feature extraction on the bag-of-word characterization data a plurality of times to generate the bag-of-word feature vector, where $f_e^d$ may represent the fully connected network in the first fully connected network sub-module 3012.
  • the bag-of-word feature vector is generated by the bag-of-word generating sub-module 3011 and the first fully connected network sub-module 3012, so that the calculation accuracy of the bag-of-word feature vector is improved, the automatic extraction of the bag-of-word feature is realized, the manual operation is reduced, and the calculation efficiency of the target skill subject distribution vector is further improved.
  • the distribution parameter calculating module includes a first parameter calculating sub-module for calculating a first skill subject vector distribution sub-parameter according to the bag-of-word feature vector and the first non-linear network layer; a second parameter calculating sub-module for calculating a second skill subject vector distribution sub-parameter based on the bag-of-word feature vector and the second non-linear network layer.
  • the distribution parameter calculating module 302 may include a first parameter calculating sub-module 3021 and a second parameter calculating sub-module 3022.
  • the first parameter calculating sub-module 3021 receives the bag-of-word feature vector from the first fully connected network sub-module 3012, and calculates the first skill subject vector distribution sub-parameter based on the pre-set first non-linear network layer.
  • the first non-linear network layer may be represented by $f_\mu^d$
  • the first skill subject vector distribution sub-parameter may be represented by $\mu^d$.
  • the second parameter calculating sub-module 3022 calculates the second skill subject vector distribution sub-parameter according to the pre-set second non-linear network layer, after receiving the bag-of-word feature vector of the first fully connected network sub-module 3012.
  • the second non-linear network layer may be represented by $f_\sigma^d$
  • the second skill subject vector distribution sub-parameter may be represented by $\sigma^d$.
  • the skill subject vector distribution parameters may include $\mu^d$ and $\sigma^d$; by calculating $\mu^d$ and $\sigma^d$, an accurate calculation of the skill subject vector distribution parameter is realized, thereby improving the calculation efficiency of the target skill subject distribution vector. A sketch of this module follows.
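  • A sketch of the distribution parameter calculating module under these definitions, assuming PyTorch and single-layer realizations of $f_e^d$, $f_\mu^d$ and $f_\sigma^d$ (layer sizes, the ReLU activation, and the use of a log-variance head are assumptions, not taken from the disclosure):

```python
import torch
import torch.nn as nn

class DistributionParameterModule(nn.Module):
    """Sketch of modules 301/302: f_e^d, then f_mu^d and f_sigma^d heads."""
    def __init__(self, vocab_size=6000, hidden=256, n_topics=30):
        super().__init__()
        # first fully connected network sub-module f_e^d (depth is assumed)
        self.f_e = nn.Sequential(nn.Linear(vocab_size, hidden), nn.ReLU())
        self.f_mu = nn.Linear(hidden, n_topics)     # first layer f_mu^d
        self.f_sigma = nn.Linear(hidden, n_topics)  # second layer f_sigma^d

    def forward(self, x_bow):
        h = self.f_e(x_bow)            # bag-of-word feature vector
        mu_d = self.f_mu(h)            # first sub-parameter mu^d
        log_sigma_d = self.f_sigma(h)  # second sub-parameter, as log(sigma^d)
        return mu_d, log_sigma_d
```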
  • the first subject distribution determining module includes: a third parameter calculating sub-module for calculating a third skill subject vector distribution parameter according to the first skill subject vector distribution sub-parameter and a first pre-set subject distribution hypothesis sub-parameter; a fourth parameter calculating sub-module for calculating a fourth skill subject vector distribution parameter according to the second skill subject vector distribution sub-parameter and a second pre-set subject distribution hypothesis sub-parameter; a first subject vector sampling sub-module for obtaining a first skill subject vector according to the third and fourth skill subject vector distribution parameters; a second fully connected network sub-module for performing feature extraction on the first skill subject vector to obtain a first subject feature vector; and a first subject distribution feature calculating sub-module for obtaining the target skill subject distribution vector based on the first subject feature vector and a first activation function.
  • the first subject distribution determining module 303 may include a third parameter calculating sub-module 3031, a fourth parameter calculating sub-module 3032, a first subject vector sampling sub-module 3033, a second fully connected network sub-module 3034, and a first subject distribution feature calculating sub-module 3035.
  • the third parameter calculating sub-module 3031 receives the first skill subject vector distribution sub-parameter $\mu^d$ from the first parameter calculating sub-module 3021, and calculates the third skill subject vector distribution parameter according to the first pre-set subject distribution hypothesis sub-parameter.
  • the first pre-set subject distribution hypothesis sub-parameter may be represented by $W_\mu$, and the third skill subject vector distribution parameter may be represented by $\mu^s$
  • the second pre-set subject distribution hypothesis sub-parameter may be represented by $W_\sigma$
  • the fourth skill subject vector distribution parameter may be represented by $\sigma^s$.
  • the first subject vector sampling sub-module 3033 receives the third skill subject vector distribution parameter $\mu^s$ from the third parameter calculating sub-module 3031 and the fourth skill subject vector distribution parameter $\sigma^s$ from the fourth parameter calculating sub-module 3032, and calculates the first skill subject vector, which may be represented by $z^s$.
  • the second fully connected network sub-module 3034 may include one or more layers of fully connected networks, where the fully connected network in the second fully connected network sub-module 3034 may be represented by $f_\theta^s$.
  • the first subject distribution feature calculating sub-module 3035 receives the first subject feature vector of the second fully connected network sub-module 3034, and obtains the target skill subject distribution vector according to the pre-set first activation function.
  • the first activation function may be represented by $\mathrm{softmax}(f_\theta^s)$, so that the target skill subject distribution vector is $\theta^s = \mathrm{softmax}(f_\theta^s(z^s))$; a sketch of this module follows.
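  • A sketch of the first subject distribution determining module, assuming $W_\mu$ and $W_\sigma$ are linear maps and that sampling uses the standard reparameterization trick (the disclosure names the modules but not the sampling procedure, so these details are assumptions):

```python
import torch
import torch.nn as nn

class FirstSubjectDistributionModule(nn.Module):
    """Sketch of module 303: mu^s = W_mu(mu^d), sigma^s = W_sigma(sigma^d),
    sample z^s, then theta^s = softmax(f_theta^s(z^s))."""
    def __init__(self, n_topics=30):
        super().__init__()
        self.W_mu = nn.Linear(n_topics, n_topics, bias=False)     # W_mu
        self.W_sigma = nn.Linear(n_topics, n_topics, bias=False)  # W_sigma
        self.f_theta_s = nn.Linear(n_topics, n_topics)            # sub-module 3034

    def forward(self, mu_d, log_sigma_d):
        mu_s = self.W_mu(mu_d)                   # third parameter mu^s
        log_sigma_s = self.W_sigma(log_sigma_d)  # fourth parameter (log sigma^s)
        eps = torch.randn_like(mu_s)             # reparameterization (assumed)
        z_s = mu_s + eps * log_sigma_s.exp()     # first skill subject vector z^s
        theta_s = torch.softmax(self.f_theta_s(z_s), dim=-1)  # softmax(f_theta^s)
        return theta_s
```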
  • One embodiment of the above application has the advantage or beneficial effect of automatically generating the target skill subject distribution vector by dividing the text subject predicting sub-model into a bag-of-word feature extracting module, a distribution parameter calculating module, and a first subject distribution determining module.
  • FIG. 4 is an illustrative structural diagram of a deep neural network model according to an embodiment of the present disclosure, which is further optimized on the basis of the above-described embodiment.
  • the deep neural network model 400 provided by the present embodiment may include: a text subject predicting sub-model 410 and a description text generating sub-model 420.
  • the description text generating sub-model includes an encoder module for generating a sequence of semantic characterization vectors of the current sentence in the original text related to the target position; an attention module for performing weighted transformation on the sequence of semantic characterization vectors according to the target skill subject distribution vector; and a decoder module for predicting a skill subject label of the current sentence according to the weighted and transformed sequence of semantic characterization vectors, and predicting the current word of the target recruitment position description text according to the skill subject label.
  • the description text generating sub-model 420 may include an encoder module 401, an attention module 402, and a decoder module 403.
  • the attention module 402 acquires the target skill subject distribution vector $\theta^s$ from the first subject distribution feature calculating sub-module 3035, and performs weighted transformation on the semantic characterization sequence $H$ according to $\theta^s$.
  • the decoder module 403 receives the weighted and transformed sequence of semantic characterization vectors, may use two unidirectional recurrent (cyclic) neural networks to model the prediction of the skill subject label of the current sentence, and further predicts the current word of the target recruitment position description text according to the skill subject label.
  • the skill subject label may be represented by $t_j$
  • the current word of the target recruitment position description text may be represented by $s_{j,k}$.
  • the encoder module includes a word vector generating sub-module for generating a word vector of each word included in the current sentence of the original text related to the target position; a first cyclic neural network sub-module for generating a sequence of semantic characterization vectors of the current sentence according to each word vector.
  • the encoder module 401 may include a word vector generating sub-module 4011 and a first cyclic neural network sub-module 4012, and the word vector generating sub-module 4011 may generate a word vector of each word included in the current sentence based on the original text related to the target position, which may be represented by $e_k^d$.
  • the first cyclic neural network sub-module 4012 receives the word vectors $e_k^d$ from the word vector generating sub-module 4011, and generates the sequence of semantic characterization vectors $H$ of the current sentence, as sketched below. Accurate calculation of the semantic characterization vector sequence is thus realized, human resources and time are saved, and the generation efficiency of the target recruitment position description text is improved.
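  • A sketch of the encoder module, assuming PyTorch and a single-layer unidirectional LSTM for the first cyclic neural network sub-module (direction and depth are assumptions):

```python
import torch
import torch.nn as nn

class EncoderModule(nn.Module):
    """Sketch of module 401: word vectors e_k^d -> recurrent layer -> H."""
    def __init__(self, vocab_size=6000, emb_dim=128, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)          # sub-module 4011
        self.rnn = nn.LSTM(emb_dim, hidden, batch_first=True)   # sub-module 4012

    def forward(self, token_ids):      # (batch, seq_len) word indices
        e = self.embed(token_ids)      # word vectors e_k^d
        H, _ = self.rnn(e)             # semantic characterization sequence H
        return H
```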
  • the attention module includes a first attention sub-module and a second attention sub-module.
  • the decoder module includes a subject predicting sub-module and a text generating sub-module.
  • the first attention sub-module is configured to perform weighted transformation on the semantic characterization vector sequence according to the target skill subject distribution vector and the hidden layer feature state vector in the subject predicting sub-module to obtain the weighted and transformed first vector sequence.
  • the second attention sub-module is configured to perform weighted transformation on the semantic characterization vector sequence according to the target skill subject distribution vector and the hidden layer feature state vector in the text generation sub-module to obtain a weighted and transformed second vector sequence.
  • the subject predicting sub-module is configured for predicting a skill subject label of the current sentence based on a target skill subject distribution vector and the first vector sequence.
  • the text generating sub-module is configured for predicting a current word in the target recruitment position description text based on the skill subject label of the current sentence and the second vector sequence.
  • the attention module 402 may include a first attention submodule 4021 and a second attention sub-module 4022
  • the decoder module 403 may include a subject predicting sub-module 4031 and a text generating sub-module 4032.
  • the first attention sub-module 4021 acquires the target skill subject distribution vector $\theta^s$ from the first subject distribution feature calculating sub-module 3035, and performs weighted transformation on the semantic characterization vector sequence $H$ according to the hidden layer feature state vector in the subject predicting sub-module 4031 to obtain the weighted and transformed first vector sequence.
  • the hidden layer feature state vector in the subject predicting sub-module 4031 may be represented by $h_j^t$
  • the first vector sequence may be represented by $u_j^t$.
  • the second attention sub-module 4022 acquires the target skill subject distribution vector $\theta^s$ from the first subject distribution feature calculating sub-module 3035, and performs weighted transformation on the semantic characterization vector sequence $H$ according to the hidden layer feature state vector in the text generating sub-module 4032 to obtain the weighted and transformed second vector sequence.
  • the hidden layer feature state vector in the text generating sub-module 4032 may be represented by $h_{j,k}^c$
  • the second vector sequence may be represented by $u_{j,k}^c$. A sketch of such an attention sub-module follows.
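  • A sketch of one topic-aware attention sub-module, reconstructed loosely from the definitions above as additive attention over $H$ conditioned on a decoder hidden state and $\theta^s$; the exact scoring form is an assumption:

```python
import torch
import torch.nn as nn

class TopicAwareAttention(nn.Module):
    """Sketch of an attention sub-module: additive attention over H,
    conditioned on a decoder hidden state and theta^s."""
    def __init__(self, enc_dim, dec_dim, n_topics, att_dim=128):
        super().__init__()
        self.W = nn.Linear(enc_dim + dec_dim + n_topics, att_dim)
        self.v = nn.Linear(att_dim, 1, bias=False)

    def forward(self, H, h_dec, theta_s):
        # H: (batch, N, enc_dim); h_dec: (batch, dec_dim); theta_s: (batch, K)
        N = H.size(1)
        query = torch.cat([h_dec, theta_s], dim=-1)
        query = query.unsqueeze(1).expand(-1, N, -1)
        g = torch.tanh(self.W(torch.cat([H, query], dim=-1)))  # intermediate g vectors
        a = torch.softmax(self.v(g).squeeze(-1), dim=-1)       # attention weights
        u = torch.bmm(a.unsqueeze(1), H).squeeze(1)            # weighted vector u
        return u
```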
  • the subject predicting sub-module 4031 predicts the skill subject label $t_j$ of the current sentence based on the target skill subject distribution vector $\theta^s$ and the first vector sequence $u_j^t$.
  • the text generating sub-module 4032 obtains the skill subject label $t_j$ of the current sentence output by the subject predicting sub-module 4031 and the second vector sequence $u_{j,k}^c$, and predicts the current word $s_{j,k}$ in the target recruitment position description text.
  • the prediction of the skill subject label and of the current word of the target recruitment position description text may be performed using the following formulas: $p(t_j \mid t_{<j}, H, \theta^s) = \mathrm{softmax}(W^t [h_j^t; u_j^t; \theta^s] + b^t)$ and $p(y_{j,k} \mid y_{<j}, y_{j,<k}, H, \theta^s, t_j) = \mathrm{softmax}(W^c [h_{j,k}^c; u_{j,k}^c; \theta^s] + b^c)$
  • $p(t_j \mid t_{<j}, H, \theta^s)$ represents the prediction probability of the skill subject label
  • $p(y_{j,k} \mid y_{<j}, y_{j,<k}, H, \theta^s, t_j)$ represents the prediction probability of the current word of the target recruitment position description text
  • $W^t$, $W^c$, $b^t$ and $b^c$ are pre-set parameters.
  • N refers to the number of sentences of the target recruitment position description text
  • $M_j^c$ refers to the number of words in the j-th sentence
  • $h_l^d$ refers to the semantic characterization vector of the words in the l-th sentence
  • $g_{l,j}^t$, $g_{p,j}^t$, $g_{l,j,k}^c$ and $g_{p,j,k}^c$ are vectors of the network intermediate layer.
  • $\tilde{W}^t$, $\tilde{W}^c$, $\tilde{v}_t^\top$, $\tilde{v}_c^\top$, $\tilde{b}^t$ and $\tilde{b}^c$ are pre-set parameters.
  • the calculations of $g_{p,j}^t$ and $g_{p,j,k}^c$ are similar to those of $g_{l,j}^t$ and $g_{l,j,k}^c$ described above, respectively: the formula for $g_{l,j}^t$ with $l$ replaced by $p$ gives $g_{p,j}^t$, and the formula for $g_{l,j,k}^c$ with $l$ replaced by $p$ gives $g_{p,j,k}^c$; $p$ and $l$ are summation indices whose values range over $[1, N]$.
  • $t_{j-1}$ represents the subject label of the previous sentence, i.e., the (j-1)-th sentence
  • $M_{j-1}^s$ represents the number of words in the (j-1)-th sentence of the target recruitment position description text
  • $k^s$ represents the number of subjects
  • $\beta^s_{*, s_{j-1,k}}$ is the vector expression of the previous sentence under the first skill subject word distribution parameter; a sketch of the prediction heads defined by the formulas above follows.
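  • A sketch of the two prediction heads from the formulas above; both are softmax layers over the concatenation $[h; u; \theta^s]$ (dimension sizes are illustrative assumptions):

```python
import torch
import torch.nn as nn

class PredictionHeads(nn.Module):
    """Sketch of the two softmax heads defined by the formulas above."""
    def __init__(self, dec_dim, enc_dim, n_topics, n_labels, vocab_size):
        super().__init__()
        d = dec_dim + enc_dim + n_topics
        self.head_t = nn.Linear(d, n_labels)    # W^t, b^t: skill subject label head
        self.head_c = nn.Linear(d, vocab_size)  # W^c, b^c: current word head

    def label_probs(self, h_t, u_t, theta_s):
        # p(t_j | t_<j, H, theta^s)
        return torch.softmax(self.head_t(torch.cat([h_t, u_t, theta_s], -1)), -1)

    def word_probs(self, h_c, u_c, theta_s):
        # p(y_{j,k} | y_<j, y_{j,<k}, H, theta^s, t_j)
        return torch.softmax(self.head_c(torch.cat([h_c, u_c, theta_s], -1)), -1)
```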
  • by dividing the decoder into the attention module 402 and the decoder module 403, the determination of the skill subject label and of the current word in the description text is completed, the calculation accuracy is improved, the automatic generation of the target recruitment position description text is realized, the labor cost is reduced, and the recruitment efficiency is improved.
  • the subject predicting sub-module includes: a second cyclic neural network sub-module for obtaining a first sequence feature vector based on the hidden layer feature state vector of the cyclic neural network predicting the previous sentence in the text generating sub-module, the embedded characterization vector corresponding to the skill subject label of the previous sentence, and the target skill subject distribution vector; and a subject generating sub-module for predicting the skill subject label of the current sentence based on the first sequence feature vector and the first vector sequence.
  • the subject predicting sub-module 4031 may include the second cyclic neural network sub-module 40311 and the subject generating sub-module 40312.
  • the second cyclic neural network sub-module 40311 obtains the hidden layer feature state vector of the cyclic neural network predicting the previous sentence in the text generating sub-module 4032, the embedded characterization vector corresponding to the skill subject label of the previous sentence, and the target skill subject distribution vector $\theta^s$, and obtains the first sequence feature vector by calculation.
  • the hidden layer feature state vector of the cyclic neural network for the previous sentence may be represented by $h^c_{j-1, M_{j-1}^c}$
  • the skill subject label of the previous sentence may be represented by $t_{j-1}$
  • the embedded characterization vector corresponding to the skill subject label of the previous sentence may be represented by $e_{j-1}^t$
  • the first sequence feature vector may be represented by $h_j^t$.
  • LSTM refers to a Long Short-Term Memory network
  • $h_{j-1}^t$ represents the first sequence feature vector of the previous sentence
  • $e_j^t$ represents the embedded characterization vector corresponding to the skill subject label of the current sentence
  • $t_j$ represents the skill subject label of the current sentence
  • $W_e^t$ represents a pre-set parameter
  • the subject generating sub-module 40312 acquires the first sequence feature vector $h_j^t$ from the second cyclic neural network sub-module 40311, and predicts the skill subject label $t_j$ of the current sentence based on the first vector sequence $u_j^t$ from the first attention sub-module 4021.
  • in this way, the prediction accuracy of the skill subject label of the current sentence is improved, and the generation of the target recruitment position description text is facilitated; a sketch of the recurrent step follows.
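  • A sketch of the second cyclic neural network sub-module as an LSTM cell whose input concatenates the previous text-decoder state, the embedding of the previous subject label, and $\theta^s$, consistent with the description above (the concatenation order and cell type are assumptions):

```python
import torch
import torch.nn as nn

class SubjectPredictorStep(nn.Module):
    """Sketch of sub-module 40311: one LSTM step producing h_j^t from the
    previous text-decoder state, the previous subject label embedding
    e_{j-1}^t, and theta^s."""
    def __init__(self, dec_dim, label_emb_dim, n_topics, hidden):
        super().__init__()
        self.cell = nn.LSTMCell(dec_dim + label_emb_dim + n_topics, hidden)

    def forward(self, h_prev_text, e_prev_label, theta_s, state):
        x = torch.cat([h_prev_text, e_prev_label, theta_s], dim=-1)
        h_t, c_t = self.cell(x, state)  # h_t is the first sequence feature vector h_j^t
        return h_t, (h_t, c_t)
```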
  • the text generating sub-module includes: a third cyclic neural network sub-module for obtaining a second sequence feature vector based on the first sequence feature vector and the word embedding characterization vector of the previous predicted word; an intermediate processing sub-module for obtaining a pre-generated word probability vector according to the second vector sequence and the second sequence feature vector; and a copy mechanism sub-module configured to process the pre-generated word probability vector based on the first skill subject word distribution parameter to obtain the current word of the predicted target recruitment position description text.
  • the text generating sub-module 4032 may include a third cyclic neural network sub-module 40321, an intermediate processing sub-module 40322, and a copy mechanism sub-module 40323.
  • the third cyclic neural network sub-module 40321 obtains the first sequence feature vector $h_j^t$ and the predicted word embedding characterization vector of the previous word, to obtain the second sequence feature vector.
  • the predicted word embedding characterization vector of the previous word may be represented by $e_{j,k-1}^c$
  • the second sequence feature vector may be represented by $h_{j,k}^c$.
  • $h_{j,k-1}^c$ represents the second sequence feature vector of the previous word
  • $e_{j,k}^c$ represents the word embedding characterization vector of the current word
  • $y_{j,k}$ represents the pre-generated word probability vector
  • $W_e^c$ represents a pre-set parameter.
  • the intermediate processing sub-module 40322 obtains the pre-generated word probability vector based on the second vector sequence $u_{j,k}^c$ and the second sequence feature vector $h_{j,k}^c$; the pre-generated word probability vector may be represented by $y_{j,k}$.
  • the copy mechanism sub-module 40323 processes the pre-generated word probability vector based on the first skill subject word distribution parameter to obtain the current word $s_{j,k}$ of the predicted target recruitment position description text.
  • the first skill subject word distribution parameter may be pre-defined and represented by $\beta^s$; a sketch of the copy mechanism follows.
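  • A sketch of the copy mechanism sub-module in the pointer-generator style: a learned gate mixes the pre-generated word distribution $y_{j,k}$ with a topic-word distribution induced by $\beta^s$. The gating form is an assumption; the disclosure only states that $\beta^s$ is used to process $y_{j,k}$:

```python
import torch
import torch.nn as nn

class CopyMechanism(nn.Module):
    """Sketch of sub-module 40323: a learned gate mixes the pre-generated
    word probabilities y_{j,k} with topic-word probabilities from beta^s."""
    def __init__(self, dec_dim, n_topics, vocab_size):
        super().__init__()
        self.gate = nn.Linear(dec_dim, 1)
        self.beta_s = nn.Parameter(torch.randn(n_topics, vocab_size))  # beta^s

    def forward(self, y_gen, h_c, theta_s):
        # y_gen: (batch, vocab) pre-generated word probability vector y_{j,k}
        p_topic = torch.softmax(theta_s @ self.beta_s, dim=-1)  # topic-word probs
        g = torch.sigmoid(self.gate(h_c))                       # mixing gate
        return g * y_gen + (1 - g) * p_topic                    # final word probs
```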
  • One embodiment of the above-mentioned application has the advantage that the automatic generation of the target recruitment position description text is realized by dividing the description text generating sub-model into an encoder module, an attention module and a decoder module.
  • the prior-art problem of manual extraction of position information by human resource employees is thereby solved, human subjectivity is reduced, the time and cost of generating a recruitment position description text are saved, errors caused by gaps in human resource employees' knowledge of the professional skills of different positions are avoided, accurate matching of recruitment positions with candidates is facilitated, and recruitment efficiency is improved.
  • FIG. 5 is an illustrative flow chart of a method of training the deep neural network model according to an embodiment of the present disclosure, which is further optimized on the basis of the above-described embodiment to train the deep neural network to generate recruitment position description text.
  • the method may be performed by a training apparatus of the deep neural network model, which may be implemented in software and/or hardware, and may be integrated in an electronic device having a computing capability.
  • the training method of the deep neural network model provided in the present embodiment may include following steps.
  • S510 includes obtaining first training sample data, and using the first training sample data to preliminarily train the pre-constructed text subject predicting sub-model to obtain a preliminarily trained text subject predicting sub-model, where the first training sample data includes a first sample-related text of a first sample position and a first standard recruitment position description text corresponding to the first sample position.
  • the first training sample data may include a first sample-related text of the first sample position and a first standard recruitment position description text corresponding to the first sample position.
  • the first sample-related text may include at least one of a resume text of a person who has been determined to meet the position requirements of the first sample position, a text containing position responsibility data, and a text containing project data related to the position; the first standard recruitment position description text is an edited standard recruitment position description text corresponding to the first sample position.
  • FIG. 6 is an illustrative structural diagram of a deep neural network model in which a text subject predicting sub-model 610 and a description text generating sub-model 620 are pre-constructed.
  • the text subject predicting sub-model 610 is preliminarily trained with the first training sample data, and the training result is corrected according to the first standard recruitment position description text to obtain the preliminarily trained text subject predicting sub-model 610.
  • the text subject predicting sub-model further includes: a second subject distribution determining module for obtaining an original skill subject distribution vector according to the skill subject vector distribution parameter; a first text reconstruction sub-module for generating predicted bag-of-word characterization data of the reconstructed original text related to the target position according to the second skill subject word distribution parameter and the original skill subject distribution vector; and a second text reconstruction sub-module for generating predicted bag-of-word characterization data of the reconstructed standard recruitment position description text according to the first skill subject word distribution parameter and the target skill subject distribution vector.
  • the text subject predicting sub-model 610 may include a bag-of-word feature extracting module 601, a distribution parameter calculating module 602, a first subject distribution determining module 603, a second subject distribution determining module 604, a first text reconstruction sub-module 605, and a second text reconstruction sub-module 606.
  • the second subject distribution determining module 604 receives the skill subject vector distribution parameters $\mu^d$ and $\sigma^d$ of the distribution parameter calculating module 602, and calculates an original skill subject distribution vector, which may be represented by $\theta^d$.
  • the second subject distribution determining module 604 may include a second subject vector sampling sub-module 6041, a third fully connected network submodule 6042, and a second subject distribution feature calculation sub-module 6043.
  • the second subject vector sampling sub-module 6041 is configured to obtain a second skill subject vector from the first skill subject vector distribution sub-parameter and the second skill subject vector distribution sub-parameter.
  • the second skill subject vector may be calculated based on $\mu^d$ and $\sigma^d$, and may be represented by $z^d$.
  • the third fully connected network sub-module 6042 is configured to perform feature extraction for the second skill subject vector to obtain the second subject feature vector.
  • the subject vector $z^d \sim N(\mu^d, (\sigma^d)^2)$ may be obtained by sampling.
  • the third fully connected network sub-module 6042 may include one or more layers of fully connected networks.
  • the fully connected network in the third fully connected network sub-module 6042 may be represented by $f_\theta^d$.
  • the second subject distribution feature calculating sub-module 6043 receives the second subject feature vector of the third fully connected network submodule 6042, and obtains the original skill subject distribution vector according to the pre-set second activation function.
  • the second activation function may be represented by $\mathrm{softmax}(f_\theta^d)$, so that the original skill subject distribution vector is $\theta^d = \mathrm{softmax}(f_\theta^d(z^d))$
  • the first text reconstruction sub-module 605 obtains the original skill subject distribution vector $\theta^d$, and obtains the reconstructed predicted bag-of-word characterization data of the original text related to the target position according to the pre-defined second skill subject word distribution parameter.
  • the second skill subject word distribution parameter may be represented by $\beta^d$
  • the second text reconstruction sub-module 606 obtains the predicted bag-of-word characterization data of the reconstructed standard recruitment position description text according to the first skill subject word distribution parameter and the target skill subject distribution vector $\theta^s$; the first skill subject word distribution parameter may be represented by $\beta^s$.
  • the prediction probability of the bag-of-word characterization data in the second text reconstruction sub-module 606 is calculated as $p(s_j \mid \theta^s, \beta^s)$, where:
  • $p(s_j \mid \theta^s, \beta^s)$ represents the prediction probability of the bag-of-word characterization data of the standard recruitment position description text
  • $M_j^s$ represents the number of words in the j-th sentence after bag-of-word feature selection
  • $\beta^s_{*, s_{j,k}}$ represents the vector expression of the current sentence under the first skill subject word distribution parameter $\beta^s$.
  • using the first training sample data to preliminarily train the pre-constructed text subject predicting sub-model to obtain a preliminarily trained text subject predicting sub-model may include: inputting the first sample-related text into the pre-constructed text subject predicting sub-model; calculating a first loss function value based on first disparity information and second disparity information by using a neural variational method; and adjusting the network parameters in the pre-constructed text subject predicting sub-model according to the calculated first loss function value until the threshold of the number of iterations is reached or the value of the loss function converges. The first disparity information is the disparity between the first predictive bag-of-word characterization data output by the first text reconstruction sub-module and the bag-of-word characterization data of the first sample-related text output by the bag-of-word feature extracting module, and the second disparity information is the disparity between the second predictive bag-of-word characterization data output by the second text reconstruction sub-module and the bag-of-word characterization data of the first standard recruitment position description text.
  • the bag-of-word feature extracting module 601 outputs the bag-of-word characterization data $X_i^{bow}$ of the first sample-related text
  • the first text reconstruction sub-module 605 outputs the first predictive bag-of-word characterization data $\hat{X}_i^{bow}$
  • the disparity information between $X_i^{bow}$ and $\hat{X}_i^{bow}$ is the first disparity information.
  • the second prediction bag-of-word characterization data is output by the second text reconstruction sub-module 606, and the disparity information between the second prediction bag-of-word characterization data and the bag-of-word characterization data of the first standard recruitment position description text is used as the second disparity information.
  • the first loss function value is calculated by the neural variational method, and the network parameters in the text subject predicting sub-model 610 are adjusted according to the first loss function value until the threshold of the number of iterations is reached or the value of the loss function converges, so that the bag-of-word characterization data output by the text subject predicting sub-model 610 meets the requirement of the bag-of-word characterization data of the first standard recruitment position description text.
  • $D_{KL}$ represents the Kullback-Leibler divergence (relative entropy) distance
  • $\beta^d_{*, x_k}$ represents the vector expression of the current word, i.e., the k-th word, under the second skill subject word distribution parameter $\beta^d$
  • $\beta^s_{*, s_{j,k}}$ represents the vector expression of the j-th sentence under the first skill subject word distribution parameter
  • $p(\theta^d)$ and $p(\theta^s)$ represent the actual probability distributions of the data
  • $q(\theta^d)$ and $q(\theta^s)$ represent the estimated probability distribution functions of the neural variational approximation; a sketch of such a loss follows.
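  • A sketch of the first loss under these definitions, combining bag-of-word reconstruction terms with closed-form Gaussian KL terms $D_{KL}(q(\theta)\,\|\,p(\theta))$; standard normal priors and equal weighting of the terms are assumptions, not stated in the disclosure:

```python
import torch

def gaussian_kl(mu, log_sigma):
    # Closed-form D_KL(N(mu, sigma^2) || N(0, I)) per example
    return 0.5 * torch.sum(mu ** 2 + (2 * log_sigma).exp() - 2 * log_sigma - 1, dim=-1)

def first_loss(x_bow, x_rec_logits, s_bow, s_rec_logits,
               mu_d, log_sigma_d, mu_s, log_sigma_s):
    # first/second disparity terms: bag-of-word reconstruction log-likelihoods
    rec_d = -(x_bow * torch.log_softmax(x_rec_logits, -1)).sum(-1)
    rec_s = -(s_bow * torch.log_softmax(s_rec_logits, -1)).sum(-1)
    # D_KL(q(theta^d) || p(theta^d)) + D_KL(q(theta^s) || p(theta^s))
    kl = gaussian_kl(mu_d, log_sigma_d) + gaussian_kl(mu_s, log_sigma_s)
    return (rec_d + rec_s + kl).mean()
```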
  • in this way, the preliminary training of the text subject predicting sub-model 610 is completed, and the accuracy of the text subject prediction is improved.
  • S520 includes obtaining second training sample data, where the second training sample data includes a second sample-related text of a second sample position and a second standard recruitment position description text corresponding to the second sample position.
  • the second training sample data is acquired.
  • the second sample-related text includes at least one of a resume text of a person who has been determined to meet the position requirements of the second sample position, a text containing position responsibility data, and a text containing project data related to the position.
  • the second standard recruitment position description text is an edited standard recruitment position description text corresponding to the second sample position.
  • S530 includes using the second training sample data to train the deep neural network model including the preliminarily trained text subject predicting sub-model and the pre-constructed description text generating sub-model, to obtain the trained deep neural network model.
  • the preliminarily trained text subject predicting sub-model 610 and the pre-constructed description text generating sub-model 620 are trained with the second training sample data, and the output of the deep neural network model is corrected according to the second standard recruitment position description text to obtain the trained deep neural network model.
  • using the second training sample data to train the deep neural network model includes:
  • the second sample-related text is input to the text subject predicting sub-model 610 and the description text generating sub-model 620.
  • the first predictive bag-of-word characterization data is output by the first text reconstruction sub-module 605 in the text subject predicting sub-model 610
  • the bag-of-word characterization data of the second sample-related text is output by the bag-of-word feature extracting module 601 in the text subject predicting sub-model 610
  • the disparity information between the first predictive bag-of-word characterization data and the bag-of-word characterization data of the second sample-related text is used as the third disparity information.
  • the second predictive bag-of-word characterization data is output by the second text reconstruction sub-module 606, and the disparity information between the second predictive bag-of-word characterization data and the bag-of-word characterization data of the second standard recruitment position description text is used as the fourth disparity information.
  • the second loss function value may be calculated by using the neural variational method.
  • the description text is output by the description text generating sub-model 620, the disparity information between the second standard recruitment position description text and the output description text is used as the fifth disparity information, and the third loss function value is calculated based on the fifth disparity information. An overall loss function value is then determined based on the calculated second loss function value, the third loss function value, and corresponding weights.
  • the network parameters in the text subject predicting sub-model and the description text generating sub-model are adjusted according to the total loss function value until the threshold of the number of iterations is reached or the total loss function value converges, so that the deep neural network model 600 can output a recruitment position description text meeting the requirements.
  • calculating a total loss function over the text subject predicting sub-model 610 and the description text generating sub-model 620 improves the accuracy of the description text generated by the deep neural network model 600, avoids inaccuracy of the description text due to subjectivity and field differences, and improves the description text generation efficiency; a sketch of the combined objective follows.
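  • A sketch of the overall training objective as a weighted sum of the second loss (topic model) and the third loss (cross-entropy between the generated text and the second standard recruitment position description text); the weights w2 and w3 are hypothetical hyper-parameters:

```python
import torch
import torch.nn.functional as F

def total_loss(second_loss, gen_logits, target_ids, w2=1.0, w3=1.0):
    """Weighted sum of the topic-model loss and the text-generation loss.
    gen_logits: (batch, seq_len, vocab); target_ids: (batch, seq_len)."""
    third_loss = F.cross_entropy(gen_logits.view(-1, gen_logits.size(-1)),
                                 target_ids.view(-1))  # fifth disparity term
    return w2 * second_loss + w3 * third_loss
```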
  • FIG. 7 is an illustrative schematic diagram of a deep neural network model 700 including a text subject predicting sub-model 710 and a description text generating sub-model 720, according to an embodiment of the present disclosure.
  • the text subject predicting sub-model 710 includes a bag-of-word feature extracting module 701, a distribution parameter calculating module 702, a first subject distribution determining module 703, a second subject distribution determining module 704, a first text reconstruction sub-module 705, and a second text reconstruction sub-module 706.
  • the bag-of-word feature extracting module 701 includes a bag-of-word generating sub-module 7011 and a first fully connected network sub-module 7012.
  • the distribution parameter calculating module 702 includes a first parameter calculating submodule 7021 and a second parameter calculating sub-module 7022.
  • the first subject distribution determining module 703 includes a third parameter calculating sub-module 7031, a fourth parameter calculating sub-module 7032, a first subject vector sampling sub-module 7033, a second fully connected network sub-module 7034 and a first subject distribution feature calculating sub-module 7035.
  • the second subject distribution determining module 704 includes a second subject vector sampling sub-module 7041, a third fully connected network sub-module 7042 and a second subject distribution feature calculating sub-module 7043.
  • the predicted bag-of-word characterization data of the reconstructed standard recruitment position description text is represented as $s_j^{bow}$.
  • the description text generating sub-model 720 includes an encoder module 707, an attention module 708, and a decoder module 709.
  • the encoder module 707 includes a word vector generating sub-module 7071 and a first cyclic neural network sub-module 7072.
  • the attention module 708 includes a first attention sub-module 7081 and a second attention sub-module 7082.
  • the decoder module 709 includes a subject prediction sub-module 7091 and a text generating sub-module 7092.
  • the subject predicting sub-module 7091 includes a second cyclic neural network sub-module 70911 and a subject generating sub-module 70912.
  • the text generating sub-module 7092 includes a third cyclic neural network sub-module 70921, an intermediate processing sub-module 70922, and a copy mechanism sub-module 70923.
  • the k-th word in the original text related to the target position or in the sample-related text is represented as $x_k$.
  • One embodiment of the above-mentioned application has the advantage of achieving preliminary training of the text subject predicting sub-model by obtaining the first training sample data, and of further training the deep neural network model by obtaining the second training sample data, so that the description text output by the deep neural network model meets the requirements of the standard text, the accuracy of the description text is improved, and the output efficiency of the target position description text is further improved.
  • FIG. 8 is a schematic structural diagram of an apparatus for generating a recruitment position description text according to an embodiment of the present disclosure.
  • the apparatus may execute the method for generating a recruitment position description text according to the embodiments of the present disclosure, and has functional modules and beneficial effects corresponding to the method.
  • the apparatus 800 may include: an original text obtaining module 801, configured to obtain an original text related to a target position; and a description text generating module 802 configured to generate the target recruitment position description text corresponding to the target position based on the original text related to the target position and the pre-trained deep neural network model.
  • the original text related to the target position includes at least one of a resume text of a person determined to meet a position requirement, a text containing position responsibility data, and a text containing project data related to the position.
  • the deep neural network model includes: a text subject predicting sub-model for predicting a target skill subject distribution vector based on the original text related to the target position; and a description text generating sub-model for generating a target recruitment position description text of the target position according to the target skill subject distribution vector.
  • the text subject predicting sub-model includes: a bag-of-word feature extracting module for extracting a bag-of-word feature vector of the original text related to the target position; a distribution parameter calculating module for calculating a skill subject vector distribution parameter according to the bag-of-word feature vector and a non-linear network layer; a first subject distribution determining module for obtaining a target skill subject distribution vector according to the skill subject vector distribution parameter and a pre-set subject distribution hypothesis parameter.
  • the bag-of-word feature extracting module includes: a bag-of-word generating sub-module for generating bag-of-word characterization data of the original text related to the target position; and a first fully connected network sub-module for performing feature extraction for the bag-of-word characterization data to obtain a bag-of-word feature vector.
  • the distribution parameter calculating module includes: a first parameter calculating sub-module for calculating a first skill subject vector distribution sub-parameter according to the bag-of-word feature vector and the first non-linear network layer; and a second parameter calculating sub-module for calculating a second skill subject vector distribution sub-parameter based on the bag-of-word feature vector and the second non-linear network layer.
  • the first subject distribution determining module includes: a third parameter calculating sub-module for calculating a third skill subject vector distribution parameter according to the first skill subject vector distribution sub-parameter and the first pre-set subject distribution hypothesis sub-parameter; a fourth parameter calculating sub-module for calculating a fourth skill subject vector distribution parameter according to the second skill subject vector distribution sub-parameter and the second pre-set subject distribution hypothesis sub-parameter; a first subject vector sampling sub-module for obtaining a first skill subject vector according to a third skill subject vector distribution parameter and a fourth skill subject vector distribution parameter; a second fully connected network sub-module for performing feature extraction for the first skill subject vector to obtain the first subject feature vector; and a first subject distribution feature calculating sub-module for obtaining a target skill subject distribution vector based on the first subject feature vector and the first activation function.
  • the description text generating sub-model includes: an encoder module for generating a sequence of semantic characterization vectors of the current sentence in the original text related to the target position; an attention module for performing weighted transformation for the sequence of the semantic characterization vectors according to the target skill subject distribution vectors; a decoder module for predicting a skill subject label of a current sentence according to the weighted and transformed semantic characterization vector sequence, and predicting the current word of the target recruitment position description text according to the skill subject label.
  • the encoder module includes: a word vector generating submodule for generating a word vector of each word included in the current sentence in the original text related to the target position; and a first cyclic neural network sub-module for generating a sequence of semantic characterization vectors of the current sentence according to each word vector.
  • the attention module includes a first attention sub-module and a second attention sub-module;
  • the decoder module includes a subject predicting submodule and a text generating sub-module.
  • the first attention sub-module is configured for performing weighted transformation on the semantic characterization vector sequence according to the target skill subject distribution vector and the hidden layer feature state vector in the subject predicting sub-module to obtain a weighted and transformed first vector sequence.
  • the second attention submodule is configured for performing weighted transformation on the semantic characterization vector sequence according to the target skill subject distribution vector and the hidden layer feature state vector in the text generating sub-module to obtain a weighted and transformed second vector sequence.
  • the subject predicting sub-module is configured for predicting a skill subject label of a current sentence based on a target skill subject distribution vector and a first vector sequence.
  • the text generating sub-module is configured for predicting the current word in the target recruitment position description text based on the skill subject label of the current sentence and the second vector sequence.
  • the subject predicting sub-module includes: a second cyclic neural network sub-module and a subject generating sub-module.
  • the second cyclic neural network sub-module is configured for obtaining a first sequence feature vector, based on a hidden layer feature state vector of a cyclic neural network predicting a previous sentence in the text generating sub-module, an embedded characterization vector corresponding to a skill subject label of the previous sentence, and a target skill subject distribution vector.
  • the subject generating sub-module is configured for predicting a skill subject label of a current sentence based on the first sequence feature vector and the first vector sequence.
  • the text generating sub-module includes: a third cyclic neural network sub-module for obtaining a second sequence feature vector based on the first sequence feature vector and the predicted word embedding characterization vector of the previous word; an intermediate processing sub-module for obtaining a pre-generated word probability vector according to the second vector sequence and the second sequence feature vector; and a copy mechanism sub-module for processing the pre-generated word probability vector based on the first skill subject word distribution parameter, to obtain the current word in the predicted target recruitment position description text (an illustrative sketch of such a copy mechanism is given below).
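The disclosure does not spell out the internal equations of the copy mechanism sub-module. The following is a minimal, pointer-generator-style sketch of how such a mechanism may blend the pre-generated word probability vector with attention mass over the source words; all tensor shapes, the `copy_mix` helper, and the generate/copy gate are illustrative assumptions rather than components named by the disclosure.

```python
import torch

def copy_mix(p_vocab, attn_weights, src_token_ids, gate):
    """Blend a pre-generated word probability vector with copy probabilities.

    p_vocab:       (batch, vocab) pre-generated word probabilities
    attn_weights:  (batch, src_len) attention weights over source words
    src_token_ids: (batch, src_len) vocabulary ids of the source words
    gate:          (batch, 1) probability of generating rather than copying
    """
    p_copy = torch.zeros_like(p_vocab)
    # Route each source word's attention mass to its vocabulary id.
    p_copy.scatter_add_(1, src_token_ids, attn_weights)
    return gate * p_vocab + (1.0 - gate) * p_copy

# Toy usage: rows of the result still sum to 1.
batch, vocab, src_len = 2, 50, 7
p_vocab = torch.softmax(torch.randn(batch, vocab), dim=-1)
attn = torch.softmax(torch.randn(batch, src_len), dim=-1)
src_ids = torch.randint(0, vocab, (batch, src_len))
gate = torch.sigmoid(torch.randn(batch, 1))
p_final = copy_mix(p_vocab, attn, src_ids, gate)   # (batch, vocab)
```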
  • the training process of the deep neural network model includes:
  • the text subject prediction sub-model further includes: a second subject distribution determining module for obtaining an original skill subject distribution vector based on the skill subject vector distribution parameter; a first text reconstruction sub-module for generating predicted bag-of-word characterization data of the reconstructed original text related to the target position, based on the second skill subject word distribution parameter and the original skill subject distribution vector; and a second text reconstruction sub-module for generating the predictive bag-of-word characterization data of the reconstructed standard recruitment position description text, based on the first skill subject word distribution parameter and the target skill subject distribution vector.
  • the process of performing preliminary training on the pre-constructed text subject predicting sub-model by using the first training sample data, to obtain a preliminarily trained text subject predicting sub-model, includes:
  • the process of training, by using the second training sample data, the deep neural network model including the preliminarily trained text subject predicting sub-model and the pre-constructed description text generating sub-model, to obtain the trained deep neural network model, includes the following (an illustrative sketch of the overall two-phase training flow is given below):
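A minimal illustrative sketch of the two-phase training flow described above: preliminary training of the text subject predicting sub-model on the first training sample data, followed by joint training of the full model on the second training sample data. The `TextSubjectPredictor` and `DescriptionTextGenerator` stubs and both loss functions are placeholder assumptions rather than the disclosure's actual architectures or objectives.

```python
import torch
from torch import nn

class TextSubjectPredictor(nn.Module):      # stand-in for the text subject predicting sub-model
    def __init__(self, vocab=1000, topics=30):
        super().__init__()
        self.enc = nn.Linear(vocab, topics)
    def forward(self, x_bow):
        return torch.softmax(self.enc(x_bow), dim=-1)

class DescriptionTextGenerator(nn.Module):  # stand-in for the description text generating sub-model
    def __init__(self, topics=30, vocab=1000):
        super().__init__()
        self.dec = nn.Linear(topics, vocab)
    def forward(self, theta):
        return self.dec(theta)

subject_model, generator = TextSubjectPredictor(), DescriptionTextGenerator()

# Phase 1: preliminary training of the text subject predicting sub-model
# on the first training sample data (random tensors as placeholders).
opt1 = torch.optim.Adam(subject_model.parameters(), lr=1e-3)
for x_bow in [torch.rand(8, 1000) for _ in range(5)]:
    theta = subject_model(x_bow)
    loss = -(theta * torch.log(theta + 1e-9)).sum(-1).mean()  # placeholder objective
    opt1.zero_grad(); loss.backward(); opt1.step()

# Phase 2: joint training of the deep neural network model, i.e. the
# preliminarily trained subject sub-model plus the description text
# generating sub-model, on the second training sample data.
params = list(subject_model.parameters()) + list(generator.parameters())
opt2 = torch.optim.Adam(params, lr=1e-3)
for x_bow, target in [(torch.rand(8, 1000), torch.randint(0, 1000, (8,))) for _ in range(5)]:
    logits = generator(subject_model(x_bow))
    loss = nn.functional.cross_entropy(logits, target)
    opt2.zero_grad(); loss.backward(); opt2.step()
```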
  • One embodiment of the above application has the advantage or beneficial effect of automatically extracting the data in the original text related to the target position through the deep neural network model, to obtain the target recruitment position description text corresponding to the target position.
  • The method solves the problems in the prior art that position information is manually extracted and the recruitment position description text is manually written by human resource employees. Human subjectivity is thereby reduced, the generation time and cost of the recruitment position description text are saved, errors caused by the domain gaps of human resource employees with respect to the professional skills of different positions are avoided, accurate matching between recruitment positions and personnel to be recruited is realized, and the generation efficiency of the recruitment position description text and the recruitment efficiency are improved.
  • the present disclosure also provides an electronic device and a readable storage medium.
  • FIG. 9 is a block diagram of an electronic device of a method for generating a recruitment position description text according to an embodiment of the present disclosure.
  • Electronic devices are intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers.
  • Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart phones, wearable devices, and other similar computing devices.
  • the components shown herein, their connections and relationships, and their functions are by way of example only and are not intended to limit the implementation of the present disclosure as described and/or claimed herein.
  • the electronic device includes one or more processors 901, a memory 902, and interfaces for connecting the components, including a high speed interface and a low speed interface.
  • the various components are interconnected by different buses and may be mounted on a common motherboard or in other manners as desired.
  • the processor may process instructions executed within the electronic device, including instructions stored in or on a memory to display graphical information of the GUI on an external input/output device, such as a display device coupled to an interface.
  • multiple processors and/or multiple buses may be used with multiple memories, if desired.
  • a plurality of electronic devices may be connected, each providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system).
  • one processor 901 is taken as an example in FIG. 9.
  • the memory 902 is a non-transitory computer readable storage medium provided in this application.
  • the memory stores instructions executable by at least one processor to cause the at least one processor to perform the generation method of the recruitment post description text provided in the present disclosure.
  • the non-transitory computer-readable storage medium of the present disclosure stores computer instructions for causing a computer to execute the generation method of the recruitment position description text provided in the present disclosure.
  • the memory 902, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the generation method of the recruitment position description text in the embodiment of the present disclosure.
  • the processor 901 executes various functional applications and data processing of the server by running non-transitory software programs, instructions, and modules stored in the memory 902, that is, implements the generation method of the recruitment post description text in the above-described method embodiment.
  • the memory 902 may include a storage program area and a storage data area, where the storage program area may store an operating system, an application program required by at least one function; the storage data area may store data or the like created according to the use of the electronic device of the generation method of the recruitment post description text.
  • memory 902 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device.
  • memory 902 may optionally include remotely disposed memory relative to processor 901, which may be connected via a network to an electronic device of a method of generating recruitment position description text. Examples of such networks include, but are not limited to, the Internet, enterprise intranets, local area networks, mobile communication networks, and combinations thereof.
  • the electronic device for generating the recruitment position description text may further include an input device 903 and an output device 904.
  • the processor 901, the memory 902, the input device 903, and the output device 904 may be connected via a bus or in other manners; connection via a bus is illustrated in FIG. 9.
  • the input device 903 may receive input digital or character information, and generate key signal inputs related to user settings and function control of the electronic device for the recruitment position description text generation method; the input device may be, for example, a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, a joystick, or the like.
  • the output device 904 may include a display device, an auxiliary lighting device (e.g., an LED), a tactile feedback device (e.g., a vibration motor), and the like.
  • the display device may include, but is not limited to, a liquid crystal display (LCD), a light emitting diode (LED) display, and a plasma display. In some embodiments, the display device may be a touch screen.
  • the various embodiments of the systems and techniques described herein may be implemented in digital electronic circuit systems, integrated circuit systems, application specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a dedicated or general purpose programmable processor, and which may receive data and instructions from a memory system, at least one input device, and at least one output device, and transmit the data and instructions to the memory system, the at least one input device, and the at least one output device.
  • The terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, device, and/or apparatus (e.g., a magnetic disk, an optical disk, a memory, a programmable logic device (PLD)) for providing machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as machine-readable signals.
  • The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
  • the systems and techniques described herein may be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to a computer.
  • Other types of devices may also be used to provide interaction with a user; for example, the feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
  • the systems and techniques described herein may be implemented in a computing system including a background component (e.g., as a data server), or a computing system including a middleware component (e.g., an application server), or a computing system including a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user may interact with embodiments of the systems and techniques described herein), or a computing system including any combination of such background component, middleware component, or front-end component.
  • the components of the system may be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN), a blockchain network, and the Internet.
  • the computer system may include a client and a server.
  • the client and server are typically remote from each other and typically interact through a communication network.
  • the relationship between the client and the server is generated by computer programs running on the corresponding computers and having a client-server relationship with each other.
  • the data in the original text related to the target position is automatically extracted through the deep neural network model to obtain the target recruitment position description text corresponding to the target position.
  • The method solves the problems in the prior art that position information is manually extracted and the recruitment position description text is manually written by human resource employees. Human subjectivity is thereby reduced, the generation time and cost of the recruitment position description text are saved, errors caused by the domain gaps of human resource employees with respect to the professional skills of different positions are avoided, accurate matching between recruitment positions and personnel to be recruited is realized, and the generation efficiency of the recruitment position description text and the recruitment efficiency are improved.


Abstract

The present disclosure discloses a method, apparatus, device and medium for generating a recruitment position description text, and relates to the technical field of artificial intelligence. The specific implementation solution may include: obtaining the original text related to the target position; generating a target recruitment position description text corresponding to the target position based on the original text and a pre-trained deep neural network model. The target recruitment position description text is automatically generated through the deep neural network, so that the personnel and the position are accurately matched, the human resource and the time of the recruitment process are reduced, and the recruitment efficiency is improved.

Description

    TECHNICAL FIELD
  • Embodiments of the present disclosure relate to computer technologies, specifically to the technical field of artificial intelligence, and more particularly, to a method, apparatus, device, and medium for generating a recruitment position description text.
  • BACKGROUND
  • A recruitment position description shows the responsibilities and skill requirements of a position; an efficient position description helps the employer find the right person for the position and provides the candidate with a clear understanding of the responsibilities and qualifications of the particular position.
  • In the related technology, to obtain a match between the recruitment position and the people to be recruited, human resource experts need to analyze the recruitment market and manually write the description of the recruitment position, which is highly subjective and requires a large amount of human cost. In addition, because human resource employees have domain gaps with respect to the professional skills of different positions, there are always some deviations, resulting in a failure of accurate matching between the recruitment position and the people to be recruited, and thus the recruitment efficiency is low.
  • SUMMARY
  • Embodiments of the present disclosure provide a method, apparatus, device and medium for generating a recruitment position description text, to accurately describe a recruitment position and improve the efficiency of generating a position description.
  • According to a first aspect, there is provided a method of generating a recruitment position description text, including:
    • obtaining the original text related to the target position;
    • generating a target recruitment position description text corresponding to the target position based on the original text related to the target position and a pre-trained deep neural network model.
  • According to a second aspect, there is provided an apparatus for generating a recruitment position description text, including:
    • an original text acquisition module, for acquiring an original text related to a target position;
    • a description text generating module, for generating a target recruitment position description text corresponding to the target position based on the original text related to the target position and a pre-trained deep neural network model.
  • According to a third aspect, there is provided an electronic device including:
    • at least one processor; and
    • a memory in communication connection with the at least one processor; where,
    • the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform a method of generating a recruitment post description text described in any of the embodiments of the present disclosure.
  • According to a fourth aspect, there is provided a non-transitory computer-readable storage medium having stored thereon computer instructions for causing the computer to perform a method of generating a recruitment post description text as described in any of the embodiments of the present disclosure.
  • According to the technology of the present disclosure, the problem of manually describing a recruitment position in text in the prior art is solved. With a deep neural network, a target recruitment position description text may be automatically and quickly generated, and the generated target recruitment position description text may be matched with the requirements of the target position, thereby improving the generation efficiency and accuracy of the recruitment position description text, and further reducing the human resource and time of the recruitment process, and improving the recruitment efficiency.
  • It is to be understood that the description in this section does not intend to identify key or critical features of the embodiments of the disclosure, nor does it intend to limit the scope of the disclosure. Other features of the present disclosure will become readily apparent from the following description.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The drawings are intended to provide a better understanding of the present disclosure and are not to be construed as limiting the application, where:
    • FIG. 1 is an illustrative flowchart of a method for generating a recruitment position description text according to an embodiment of the present disclosure;
    • FIG. 2 is an illustrative structural diagram of a deep neural network model according to an embodiment of the present disclosure;
    • FIG. 3 is an illustrative structural diagram of a deep neural network model according to an embodiment of the present disclosure;
    • FIG. 4 is an illustrative structural diagram of a deep neural network model according to an embodiment of the present disclosure;
    • FIG. 5 is an illustrative flow diagram of a training method of a deep neural network model according to an embodiment of the present disclosure;
    • FIG. 6 is an illustrative structural diagram of a deep neural network model according to an embodiment of the present disclosure;
    • FIG. 7 is an illustrative structural diagram of a deep neural network model according to an embodiment of the present disclosure;
    • FIG. 8 is an illustrative structural diagram of an apparatus for generating a recruitment position description text according to an embodiment of the present disclosure; and
    • FIG. 9 is a block diagram of an electronic device used to implement a method of generating a recruitment position description text according to an embodiment of the present disclosure.
    DETAILED DESCRIPTION OF EMBODIMENTS
  • Exemplary embodiments of the present disclosure are described below in connection with the accompanying drawings, in which various details of the embodiments of the present disclosure are included to facilitate understanding, and are to be considered as exemplary only. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications may be made to the embodiments described herein without departing from the scope and spirit of the present disclosure. Also, for clarity and conciseness, descriptions of well-known functions and structures are omitted from the following description.
  • FIG. 1 is an illustrative flow chart of a method for generating a recruitment position description text according to an embodiment of the present disclosure, which is used for automatically generating a recruitment position description text. The method may be performed by a device for generating a recruitment position description text, which may be implemented in software and/or hardware, and may be integrated into an electronic device with computing capability. As shown in FIG. 1, the method for generating the recruitment position description text according to the present embodiment may include the following steps.
  • S110 includes obtaining an original text related to a target position.
  • The original text related to the target position collected in advance by the staff is obtained.
  • In the present embodiment, optionally, the original text includes at least one of a resume text of a person who has been determined to meet the position requirement, a text containing position responsibility data, and a text containing project data related to the position.
  • Specifically, the resume text of the person who has been determined to meet the position requirements may include the resume text of a person who has already enrolled and the resume text of a person who has passed the review and is to be enrolled. The staff collects, in advance, the resume text of an enrolled person and the resume text of a person who has passed a review and is about to be enrolled, collects responsibility data of different positions as the text containing position responsibility data, and collects project or engineering data related to different positions as the text containing project data related to the position. For example, the content written in the resume of an employee may be that the professional research direction of the employee is intelligent robots, and the content written in the text containing the project data related to the position may be that the target position project refers to an intelligent robot obstacle avoidance project. By obtaining the original text, valid information related to the duty and skill requirements of the target position may be extracted, which facilitates accurate matching of the generated target recruitment position description text with the duty and skill requirements of the target position.
  • S120 includes generating a target recruitment position description text corresponding to the target position based on the original text and a pre-trained deep neural network model.
  • The deep neural network model is a model pre-trained to generate the target recruitment position description text. The target recruitment position description text includes the description of the duties and skills of the target position and the like, which shall be presented to the position seeker. The original text related to the target position is input into the pre-trained deep neural network model, and the related data of the target position is extracted from the original text by the deep neural network model. For example, data such as the current position, research direction, and current project of an enrolled person may be extracted from the resume texts of enrolled persons; data such as the position willingness and research direction of a to-be-enrolled person may be extracted from the resume texts of persons who have passed review and will be enrolled; data such as the main responsibility, work task, and professional requirement of the position may be extracted from the text containing position responsibility data; and data such as the historical and current projects of the position may be extracted from the text containing project data related to the position.
  • After obtaining the original text related to the target position, the deep neural network model generates the target recruitment position description text corresponding to the target position based on the extracted data.
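As a minimal sketch of this two-stage inference flow, with a toy bag-of-words tensor standing in for the combined resume, responsibility, and project texts, and placeholder linear layers standing in for the two sub-models described in the embodiments below:

```python
import torch
from torch import nn

class ToyDeepModel(nn.Module):
    """Toy stand-in mirroring only the data flow, not the real architecture."""
    def __init__(self, vocab=100, topics=10):
        super().__init__()
        self.subject = nn.Linear(vocab, topics)    # text subject predicting sub-model
        self.generate = nn.Linear(topics, vocab)   # description text generating sub-model
    def forward(self, bow):
        theta = torch.softmax(self.subject(bow), dim=-1)    # skill subject distribution
        return torch.softmax(self.generate(theta), dim=-1)  # word distribution for the description

model = ToyDeepModel()
bow = torch.rand(1, 100)            # stand-in for the encoded original text
word_probs = model(bow)
print(word_probs.argmax(dim=-1))    # id of the most likely description word
```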
  • One embodiment of the above-mentioned application has the advantage or beneficial effect that the target recruitment position description text may be automatically and quickly generated through the deep neural network, and the generated target recruitment position description text may be matched with the needs of the target position, thereby improving the generation efficiency and accuracy of the recruitment position description text, and further reducing the human resource and time of the recruitment process and improving the recruitment efficiency.
  • FIG. 2 is an illustrative structural diagram of a deep neural network model according to an embodiment of the present disclosure, which may execute the method as discussed in the above. As shown in FIG. 2, the present embodiment provides a deep neural network model 200 that may include:
    • a text subject predicting sub-model 210 for predicting a target skill subject distribution vector based on the original text related to the target position; and
    • a description text generating sub-model 220 for generating the target recruitment position description text of the target position according to the target skill subject distribution vector.
  • The target skill subject distribution vector is a skill subject distribution vector of the target position, and the skill subject refers to a category name of a job skill required by the position. For example, the skill subject may include a coding type of skill subject, a machine learning type of skill subject, a big data type of skill subject, and the like.
  • The text subject predicting sub-model 210 obtains the original text related to the target position, and extracts the skill subject data of the target position from the original text related to the target position. For example, the project name of the enrolled person at the target position may be extracted, and the skill subject of the project may be obtained based on the project name. The target skill subject distribution vector may be predicted based on the related data of the target position, thereby determining the skill subject of the target position.
  • After the text subject predicting sub-model 210 determines the target skill subject distribution vector, the target skill subject distribution vector is transmitted to the description text generating sub-model 220, and the description text generating sub-model 220 generates the target recruitment position description text according to the target skill subject distribution vector, to facilitate the text description of the target position. For example, if the target position is a software engineer, and the target skill subject distribution vector of the position corresponds to a coding type of skill subject, the target recruitment position description text finally generated may be "Software engineer: requires proficient use of Java and C++, and more than three years of working experience."
  • One embodiment of the above-mentioned application has the advantage or beneficial effect that dividing the deep neural network model into a text subject predicting sub-model and a description text generating sub-model reduces manual operation steps, saves human resources and time, and separates the determination of the skill subject from the generation of the description text for the target position. The description text is then obtained according to the skill subject, which improves the accuracy and efficiency of the target recruitment position description text.
  • FIG. 3 is an illustrative structural diagram of a deep neural network model according to an embodiment of the present disclosure, which is further optimized on the basis of the above-described embodiment. As shown in FIG. 3, the deep neural network model 300 provided by the present embodiment may include: a text subject predicting sub-model 310 and a description text generating sub-model 320.
  • In the present embodiment, optionally, the text subject predicting sub-model 310 includes: a bag-of-word feature extraction module for extracting a bag-of-word feature vector of the original text related to the target position; a distribution parameter calculation module for calculating a skill subject vector distribution parameter according to bag-of-word feature vector and non-linear network layer; a first subject distribution determining module for obtaining a target skill subject distribution vector according to the skill subject vector distribution parameter and a pre-set subject distribution hypothesis parameter.
  • The bag-of-word feature extracting module 301 extracts a bag-of-word feature vector from the original text related to the target position, after obtaining the original text related to the target position. For example, if the original text related to the target position is "software engineer needs programming basis" and "software engineer needs to be serious and down-to-earth", the bag-of-word feature vectors may be expressed as [111100] and [110011].
  • The bag-of-word feature extracting module 301 sends the bag-of-word feature vector to the distribution parameter calculating module 302, and the distribution parameter calculating module 302 calculates the skill subject vector distribution parameter according to the bag-of-word feature vector and the pre-set non-linear network layer. The distribution parameter calculating module 302 sends the skill subject vector distribution parameter to the first subject distribution determining module 303, and the first subject distribution determining module 303 calculates the target skill subject distribution vector according to the skill subject vector distribution parameter and the pre-set subject distribution hypothesis parameter.
  • By dividing the text subject predicting sub-model 310 into three submodules, the well-organized calculation of the target skill subject distribution vector is realized, the calculation accuracy is improved, the manual operation is reduced, the process of manually determining the skill subject is avoided, and the calculation efficiency of the target skill subject distribution vector is improved.
  • In this embodiment, optionally, the bag-of-word feature extracting module includes a bag-of-word generating sub-module for generating bag-of-word characterization data of the original text related to the target position; a first fully connected network sub-module for performing feature extraction of the bag-of-word characterization data to obtain a bag-of-word feature vector.
  • Specifically, the bag-of-word feature extracting module 301 may include a bag-of-word generating sub-module 3011 and a first fully connected network sub-module 3012. The first fully connected network sub-module 3012 may include one or more layers of fully connected networks. After the original text related to the target position is received by the deep neural network model, it is acquired by the bag-of-word generating sub-module 3011 in the bag-of-word feature extracting module 301. The bag-of-word generating sub-module 3011 extracts the bag-of-word characterization data in the original text related to the target position. For example, if the original text related to the target position is "software engineer needs programming basis" and "software engineer needs to be serious and down-to-earth", the extracted bag-of-word characterization data is "software engineer, need, programming, basis, serious, down-to-earth", which is represented as $X_i^{bow}$. The bag-of-word generating sub-module 3011 sends the bag-of-word characterization data to the first fully connected network sub-module 3012, and features may be extracted from the bag-of-word characterization data by the first fully connected network sub-module 3012 a plurality of times to generate a bag-of-word feature vector, where $f_{e_d}$ may represent the fully connected network in the first fully connected network sub-module 3012. The bag-of-word feature vector is generated by the bag-of-word generating sub-module 3011 and the first fully connected network sub-module 3012, so that the calculation accuracy of the bag-of-word feature vector is improved, the automatic extraction of the bag-of-word feature is realized, the manual operation is reduced, and the calculation efficiency of the target skill subject distribution vector is further improved.
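A minimal sketch of the bag-of-word generation and fully connected feature extraction described above, assuming a toy vocabulary and a two-layer network for $f_{e_d}$ (both assumptions):

```python
import torch
from torch import nn

# Assumed toy vocabulary; the real module builds it from the original text.
vocab = {"software": 0, "engineer": 1, "need": 2, "programming": 3,
         "basis": 4, "serious": 5, "down-to-earth": 6}

def bag_of_words(tokens, vocab):
    """Bag-of-word characterization data X_i^bow as multi-hot counts."""
    x = torch.zeros(len(vocab))
    for t in tokens:
        if t in vocab:
            x[vocab[t]] += 1.0
    return x

# f_ed: the first fully connected network sub-module (two layers assumed).
f_ed = nn.Sequential(nn.Linear(len(vocab), 16), nn.ReLU(),
                     nn.Linear(16, 16), nn.ReLU())

x_bow = bag_of_words(["software", "engineer", "need", "programming", "basis"], vocab)
feature = f_ed(x_bow)   # bag-of-word feature vector
```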
  • In the present embodiment, optionally, the distribution parameter calculating module includes a first parameter calculating sub-module for calculating a first skill subject vector distribution sub-parameter according to the bag-of-word feature vector and the first non-linear network layer; a second parameter calculating sub-module for calculating a second skill subject vector distribution sub-parameter based on the bag-of-word feature vector and the second non-linear network layer.
  • Specifically, the distribution parameter calculating module 302 may include a first parameter calculating sub-module 3021 and a second parameter calculating sub-module 3022. The first parameter calculating sub-module 3021 receives the bag-of-word feature vector from the first fully connected network sub-module 3012, and calculates the first skill subject vector distribution sub-parameter based on the pre-set first non-linear network layer. The first non-linear network layer may be represented by $f_{\mu_d}$, and the first skill subject vector distribution sub-parameter may be represented by $\mu_d$, calculated as follows:

$$\mu_d = f_{\mu_d}\left(f_{e_d}\left(X_i^{bow}\right)\right)$$

  • The second parameter calculating sub-module 3022 calculates the second skill subject vector distribution sub-parameter according to the pre-set second non-linear network layer, after receiving the bag-of-word feature vector from the first fully connected network sub-module 3012. The second non-linear network layer may be represented by $f_{\sigma_d}$, and the second skill subject vector distribution sub-parameter may be represented by $\sigma_d$, calculated as follows:

$$\log \sigma_d = f_{\sigma_d}\left(f_{e_d}\left(X_i^{bow}\right)\right)$$
  • The skill subject vector distribution parameters may include $\mu_d$ and $\sigma_d$; by calculating $\mu_d$ and $\sigma_d$, an accurate calculation of the skill subject vector distribution parameter is realized, thereby improving the calculation efficiency of the target skill subject distribution vector.
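Continuing the sketch, the two non-linear network layers producing $\mu_d$ and $\log \sigma_d$ may be modeled as follows; the layer shapes and the choice of tanh are assumptions:

```python
import torch
from torch import nn

feat_dim, topic_dim = 16, 10    # assumed sizes

# f_mu_d and f_sigma_d: the first and second non-linear network layers.
f_mu_d = nn.Sequential(nn.Linear(feat_dim, topic_dim), nn.Tanh())
f_sigma_d = nn.Sequential(nn.Linear(feat_dim, topic_dim), nn.Tanh())

feature = torch.randn(feat_dim)    # bag-of-word feature vector f_ed(X_i^bow)
mu_d = f_mu_d(feature)             # first skill subject vector distribution sub-parameter
log_sigma_d = f_sigma_d(feature)   # second sub-parameter, predicted in log space
```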
  • In the present embodiment, optionally, the first subject distribution determining module includes a third parameter calculation sub-module for calculating a third skill subject vector distribution parameter according to the first skill subject vector distribution sub-parameter and the first pre-set subject distribution hypothesis sub-parameter; a fourth parameter calculating sub-module for calculating a fourth skill subject vector distribution parameter according to the second skill subject vector distribution sub-parameter and the second pre-set subject distribution hypothesis sub-parameter; a first subject vector sampling sub-module for obtaining a first skill subject vector according to a third skill subject vector distribution parameter and a fourth skill subject vector distribution parameter; a second fully connected network sub-module for performing feature extraction on the first skill subject vector to obtain the first subject feature vector; a first subject distribution feature calculating sub-module for obtaining a target skill subject distribution vector based on the first subject feature vector and the first activation function.
  • Specifically, the first subject distribution determining module 303 may include a third parameter calculating sub-module 3031, a fourth parameter calculating sub-module 3032, a first subject vector sampling sub-module 3033, a second fully connected network sub-module 3034, and a first subject distribution feature calculating sub-module 3035. The third parameter calculating sub-module 3031 receives the first skill subject vector distribution sub-parameter $\mu_d$ from the first parameter calculating sub-module 3021, and calculates the third skill subject vector distribution parameter according to the pre-defined first pre-set subject distribution hypothesis sub-parameter. The first pre-set subject distribution hypothesis sub-parameter may be represented by $W_\mu$, and the third skill subject vector distribution parameter may be represented by $\mu_s$, where $\mu_s = W_\mu \mu_d$. The fourth parameter calculating sub-module 3032 receives the second skill subject vector distribution sub-parameter $\sigma_d$ from the second parameter calculating sub-module 3022, and calculates the fourth skill subject vector distribution parameter $\log \sigma_s = W_\sigma(\log \sigma_d)$ according to the pre-defined second pre-set subject distribution hypothesis sub-parameter. The second pre-set subject distribution hypothesis sub-parameter may be represented by $W_\sigma$, and the fourth skill subject vector distribution parameter may be represented by $\sigma_s$.
  • The first subject vector sampling sub-module 3033 receives the third skill subject vector distribution parameter $\mu_s$ from the third parameter calculating sub-module 3031 and the fourth skill subject vector distribution parameter $\sigma_s$ from the fourth parameter calculating sub-module 3032, and calculates the first skill subject vector, which may be represented by $z_s$. For example, for a corresponding target recruitment description text $S = \{s_1, s_2, \ldots, s_N\}$, where each $s_j$ is a recruitment description sentence, the subject vector may be obtained by sampling $z_s \sim \mathcal{N}(\mu_s, \sigma_s^2)$. The second fully connected network sub-module 3034 receives the first skill subject vector from the first subject vector sampling sub-module 3033, and performs feature extraction on the first skill subject vector to obtain the first subject feature vector. The second fully connected network sub-module 3034 may include one or more layers of fully connected networks, where the fully connected network in the second fully connected network sub-module 3034 may be represented by $f_{\theta_s}$.
  • The first subject distribution feature calculating sub-module 3035 receives the first subject feature vector from the second fully connected network sub-module 3034, and obtains the target skill subject distribution vector according to the pre-set first activation function. The first activation function may be a softmax applied to the output of $f_{\theta_s}$, and the target skill subject distribution vector may be represented by $\theta_s$, where $\theta_s = \mathrm{softmax}(f_{\theta_s}(z_s))$. By performing the calculation of the target skill subject distribution vector in separate steps, the calculation efficiency is improved, the calculation accuracy of the target skill subject distribution vector is ensured, and the automation of text subject prediction is realized.
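The sampling $z_s \sim \mathcal{N}(\mu_s, \sigma_s^2)$ is commonly implemented with the reparameterization trick so that gradients can flow through the sampling step; the trick and all dimensions below are implementation assumptions:

```python
import torch
from torch import nn

topic_dim = 10
mu_d, log_sigma_d = torch.randn(topic_dim), torch.randn(topic_dim)

# W_mu, W_sigma: the pre-set subject distribution hypothesis sub-parameters.
W_mu = nn.Linear(topic_dim, topic_dim, bias=False)
W_sigma = nn.Linear(topic_dim, topic_dim, bias=False)

mu_s = W_mu(mu_d)                    # third skill subject vector distribution parameter
log_sigma_s = W_sigma(log_sigma_d)   # fourth parameter, kept in log space

# Reparameterized sample of the first skill subject vector z_s ~ N(mu_s, sigma_s^2).
z_s = mu_s + torch.exp(log_sigma_s) * torch.randn_like(mu_s)

f_theta_s = nn.Linear(topic_dim, topic_dim)       # second fully connected network sub-module
theta_s = torch.softmax(f_theta_s(z_s), dim=-1)   # target skill subject distribution vector
```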
  • One embodiment of the above application has the advantage or beneficial effect of automatically generating a target skill subject distribution vector by dividing the text subject predicting sub-model into a bag-of-word feature extracting module, a distribution parameter calculating module, and a first subject distribution determining module. The problem of manual extraction of position information by human resource employees in the prior art is solved, human subjectivity is reduced, the generation time and cost of the recruitment position description text are saved, errors caused by the domain gaps of human resource employees with respect to the professional skills of different positions are avoided, accurate matching of recruitment positions with recruitment personnel is facilitated, and the recruitment efficiency is improved.
  • FIG. 4 is an illustrative structural diagram of a deep neural network model according to an embodiment of the present disclosure, which is further optimized on the basis of the above-described embodiment. As shown in FIG. 4, the deep neural network model 400 provided by the present embodiment may include: a text subject predicting sub-model 410 and a description text generating sub-model 420.
  • In the present embodiment, optionally, the description text generating sub-model includes an encoder module for generating a sequence of semantic characterization vectors of the current sentence in the original text related to the target position; an attention module for performing weighted transformation on the sequence of the semantic characterization vectors according to the target skill subject distribution vector; a decoder module for predicting a skill subject label of a current sentence according to the weighted and transformed sequence of semantic characterization vectors; and predicting the current word of the target recruitment position description text according to the skill subject label.
  • Specifically, the description text generating sub-model 420 may include an encoder module 401, an attention module 402, and a decoder module 403. The encoder module 401 generates a sequence of semantic characterization vectors of the current sentence in the original text related to the target position. The sequence of semantic characterization vectors may be represented by $H$; a bidirectional cyclic neural network may be used on the input sequence $X = \{x_1, x_2, \ldots, x_{M_d}\}$ to obtain the sequence of semantic characterization vectors $H = (h_1^d, h_2^d, \ldots, h_{M_d}^d)$, where $h_k^d$ represents the semantic characterization vector of any word in the input sequence, and $M_d$ represents the number of words. The attention module 402 acquires the target skill subject distribution vector $\theta_s$ from the first subject distribution feature calculating sub-module 3035, and performs weighted transformation on the semantic characterization vector sequence $H$ according to the target skill subject distribution vector $\theta_s$. The decoder module 403 receives the weighted and transformed sequence of semantic characterization vectors, and may use two one-way cyclic neural networks to model the prediction of the skill subject label of the current sentence, and further obtain the prediction of the current word of the target recruitment position description text according to the skill subject label. The skill subject label may be represented by $t_j$, and the current word of the target recruitment position description text may be represented by $s_{j,k}$. By dividing the description text generating sub-model 420 into modules, the skill subject label of the target recruitment position and the current word of the description text may be automatically predicted, so that manual operation is reduced and the efficiency of forming the recruitment statement is improved.
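A minimal sketch of the encoder path, using a bidirectional GRU as the bidirectional cyclic neural network; the GRU choice and all sizes are assumptions:

```python
import torch
from torch import nn

vocab_size, emb_dim, hidden = 100, 32, 64
embed = nn.Embedding(vocab_size, emb_dim)   # word vector generating sub-module
birnn = nn.GRU(emb_dim, hidden, bidirectional=True, batch_first=True)  # first cyclic neural network sub-module

x = torch.randint(0, vocab_size, (1, 9))    # word ids x_1 .. x_M of one sentence
e = embed(x)                                # word vectors e_k^d, shape (1, 9, emb_dim)
H, _ = birnn(e)                             # semantic characterization vectors, shape (1, 9, 2 * hidden)
```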
  • In the present embodiment, optionally, the encoder module includes a word vector generating sub-module for generating a word vector of each word included in the current sentence of the original text related to the target position; a first cyclic neural network sub-module for generating a sequence of semantic characterization vectors of the current sentence according to each word vector.
  • Specifically, the encoder module 401 may include a word vector generating sub-module 4011 and a first cyclic neural network sub-module 4012. The word vector generating sub-module 4011 may generate a word vector of each word included in the current sentence based on the original text related to the target position, which may be represented by $e_k^d$. The first cyclic neural network sub-module 4012 receives the word vector $e_k^d$ from the word vector generating sub-module 4011, and generates the sequence of semantic characterization vectors $H$ of the current sentence. Accurate calculation of the semantic characterization vector sequence is thus realized, human resources and time are saved, and the generation efficiency of the target recruitment position description text is improved.
  • In the present embodiment, optionally, the attention module includes a first attention sub-module and a second attention sub-module. The decoder module includes a subject predicting sub-module and a text generating submodule. The first attention sub-module is configured to perform weighted transformation on the semantic characterization vector sequence according to the target skill subject distribution vector and the hidden layer feature state vector in the subject prediction sub-module to obtain the weighted and transformed first vector sequence. The second attention sub-module is configured to perform weighted transformation on the semantic characterization vector sequence according to the target skill subject distribution vector and the hidden layer feature state vector in the text generation sub-module to obtain a weighted and transformed second vector sequence. The subject predicting sub-module is configured for predicting a skill subject label of the current sentence based on a target skill subject distribution vector and the first vector sequence. The text generating sub-module is configured for predicting a current word in the target recruitment position description text based on the skill subject label of the current sentence and the second vector sequence.
  • Specifically, the attention module 402 may include a first attention sub-module 4021 and a second attention sub-module 4022, and the decoder module 403 may include a subject predicting sub-module 4031 and a text generating sub-module 4032. The first attention sub-module 4021 acquires the target skill subject distribution vector $\theta_s$ from the first subject distribution feature calculating sub-module 3035, and performs weighted transformation on the semantic characterization vector sequence $H$ according to the hidden layer feature state vector in the subject predicting sub-module 4031 to obtain the weighted and transformed first vector sequence. The hidden layer feature state vector in the subject predicting sub-module 4031 may be represented by $h_j^t$, and the first vector sequence may be represented by $u_j^t$. The second attention sub-module 4022 acquires the target skill subject distribution vector $\theta_s$ from the first subject distribution feature calculating sub-module 3035, and performs weighted transformation on the semantic characterization vector sequence $H$ according to the hidden layer feature state vector in the text generating sub-module 4032 to obtain the weighted and transformed second vector sequence. The hidden layer feature state vector in the text generating sub-module 4032 may be represented by $h_{j,k}^c$, and the second vector sequence may be represented by $u_{j,k}^c$.
  • The subject predicting sub-module 4031 predicts the skill subject label $t_j$ of the current sentence based on the target skill subject distribution vector $\theta_s$ and the first vector sequence $u_j^t$. The text generating sub-module 4032 obtains the skill subject label $t_j$ of the current sentence output by the subject predicting sub-module 4031 and the second vector sequence $u_{j,k}^c$, and predicts the current word $s_{j,k}$ in the target recruitment position description text. For example, the prediction of the skill subject label and of the current word of the target recruitment position description text may be performed using the following formulas:

$$p\left(t_j \mid t_{<j}, H, \theta_s\right) = \mathrm{softmax}\left(W_t\left[h_j^t; u_j^t; \theta_s\right] + b_t\right)$$

$$p\left(y_{j,k} \mid y_{<j}, y_{j,<k}, H, \theta_s, t_j\right) = \mathrm{softmax}\left(W_c\left[h_{j,k}^c; u_{j,k}^c; \theta_s\right] + b_c\right)$$

  • where $p(t_j \mid t_{<j}, H, \theta_s)$ represents the prediction probability of the skill subject label, $p(y_{j,k} \mid y_{<j}, y_{j,<k}, H, \theta_s, t_j)$ represents the prediction probability of the current word of the target recruitment position description text, $[\,\cdot\,;\,\cdot\,]$ denotes vector concatenation, and $W_t$, $W_c$, $b_t$ and $b_c$ are pre-set parameters.
  • The first vector sequence $u_j^t$ and the second vector sequence $u_{j,k}^c$ may be calculated by:

$$u_j^t = \sum_{l=1}^{N} \alpha_{l,j}^t h_l^d \qquad u_{j,k}^c = \sum_{l=1}^{M_j^c} \alpha_{l,j,k}^c h_l^d$$

  • where $N$ refers to the number of sentences of the target recruitment position description text, $M_j^c$ refers to the number of words in the $j$th sentence, $h_l^d$ refers to the semantic characterization vector of the $l$th word, and $\alpha_{l,j}^t$ and $\alpha_{l,j,k}^c$ are intermediate variables in the calculation of the attention mechanism, calculated by the following formulas:

$$\alpha_{l,j}^t = \frac{\exp\left(g_{l,j}^t\right)}{\sum_{p=1}^{N} \exp\left(g_{p,j}^t\right)} \qquad \alpha_{l,j,k}^c = \frac{\exp\left(g_{l,j,k}^c\right)}{\sum_{p=1}^{M_j^c} \exp\left(g_{p,j,k}^c\right)}$$
  • where $g_{l,j}^t$, $g_{p,j}^t$, $g_{l,j,k}^c$ and $g_{p,j,k}^c$ are vectors of the network intermediate layer. The formulas for calculating $g_{l,j}^t$ and $g_{l,j,k}^c$ are as follows:

$$g_{l,j}^t = \left(v_{\alpha_t}\right)^T \tanh\left(W_{\alpha_t}\left[h_l^d; h_j^t; \theta_s\right] + b_{\alpha_t}\right)$$

$$g_{l,j,k}^c = \left(v_{\alpha_c}\right)^T \tanh\left(W_{\alpha_c}\left[h_l^d; h_{j,k}^c; \theta_s\right] + b_{\alpha_c}\right)$$

  • where $W_{\alpha_t}$, $W_{\alpha_c}$, $v_{\alpha_t}^T$, $v_{\alpha_c}^T$, $b_{\alpha_t}$ and $b_{\alpha_c}$ are pre-set parameters.
  • The calculations of $g_{p,j}^t$ and $g_{p,j,k}^c$ are similar to those of $g_{l,j}^t$ and $g_{l,j,k}^c$ described above, respectively: the formula of $g_{l,j}^t$ with $l$ replaced by $p$ is the calculation formula of $g_{p,j}^t$, and the formula of $g_{l,j,k}^c$ with $l$ replaced by $p$ is the calculation formula of $g_{p,j,k}^c$. $p$ and $l$ are indexes of the summations, whose values are selected in $[1, N]$.
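A minimal sketch of this additive attention for the subject-side path: it computes $g_{l,j}^t$ for all $l$ at once, normalizes to $\alpha_{l,j}^t$, and forms the weighted sum $u_j^t$; all dimensions are assumptions:

```python
import torch
from torch import nn

enc_dim, dec_dim, topic_dim, attn_dim, N = 128, 64, 10, 32, 6

# Pre-set parameters W_alpha_t, v_alpha_t, b_alpha_t of the first attention
# sub-module (the Linear bias plays the role of b_alpha_t).
W_alpha_t = nn.Linear(enc_dim + dec_dim + topic_dim, attn_dim)
v_alpha_t = nn.Linear(attn_dim, 1, bias=False)

H = torch.randn(N, enc_dim)        # semantic characterization vectors h_l^d
h_j_t = torch.randn(dec_dim)       # hidden state of the subject predicting sub-module
theta_s = torch.softmax(torch.randn(topic_dim), dim=-1)

# g_{l,j}^t = v^T tanh(W [h_l^d; h_j^t; theta_s] + b), computed for all l at once.
ctx = torch.cat([H, h_j_t.expand(N, -1), theta_s.expand(N, -1)], dim=-1)
g = v_alpha_t(torch.tanh(W_alpha_t(ctx))).squeeze(-1)   # (N,)
alpha = torch.softmax(g, dim=0)                         # attention weights alpha_{l,j}^t
u_j_t = (alpha.unsqueeze(-1) * H).sum(dim=0)            # weighted sum u_j^t
```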
  • For the subject label $t_{j-1}$ of the previous sentence, the following equation is used:

    $$t_{j-1} = \operatorname*{argmax}_{l \in \{1, \ldots, K_s\}} \Big( \theta_s \prod_{k=1}^{M_{j-1}^s} \beta_{*, s_{j-1,k}}^s \Big)_l$$

  • Where $t_{j-1}$ represents the subject label of the previous sentence, i.e., the (j-1)th sentence, $M_{j-1}^s$ represents the number of words in the (j-1)th sentence in the target recruitment position description text, $K_s$ represents the number of subjects, and $\beta_{*, s_{j-1,k}}^s$ is a vector expression of the previous sentence by the first skill subject word distribution parameter.
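A small illustrative sketch of this argmax (all names are assumptions; `beta_s` is taken as a subjects-by-vocabulary matrix, so each column is the over-subjects vector for one word; in practice log-space accumulation may be preferable for long sentences):

```python
import torch

def previous_sentence_label(theta_s, beta_s, word_ids):
    # theta_s: (K,) subject distribution; beta_s: (K, V) subject-word distribution
    # word_ids: LongTensor of word indices s_{j-1,k} of the previous sentence
    scores = theta_s * beta_s[:, word_ids].prod(dim=1)  # theta_s * prod_k beta[:, s_{j-1,k}]
    return int(torch.argmax(scores))                    # subject label t_{j-1}
```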
  • By dividing the work between the attention module 402 and the decoder module 403, the determination of the skill subject label and of the current words in the description text is completed, the calculation accuracy is improved, the automatic generation of the target recruitment position description text is realized, the labor cost is reduced, and the recruitment efficiency is improved.
  • In the present embodiment, optionally, the subject predicting sub-module includes: a second cyclic neural network sub-module for obtaining a first sequence feature vector based on a hidden layer feature state vector of the cyclic neural network predicting the previous sentence in the text generating sub-module, an embedded characterization vector corresponding to a skill subject label of the previous sentence, and a target skill subject distribution vector; and a subject generating sub-module for predicting a skill subject label of a current sentence based on the first sequence feature vector and the first vector sequence.
  • Specifically, the subject predicting sub-module 4031 may include the second cyclic neural network sub-module 40311 and the subject generating sub-module 40312. The second cyclic neural network sub-module 40311 obtains the hidden layer feature state vector of the cyclic neural network predicting the previous sentence in the text generating sub-module 4032, obtains the embedded characterization vector corresponding to the skill subject label of the previous sentence, obtains the target skill subject distribution vector $\theta_s$, and obtains the first sequence feature vector by calculation. The hidden layer feature state vector of the cyclic neural network of the previous sentence may be represented by $h_{j-1, M_{j-1}^c}^c$, the skill subject label of the previous sentence may be represented by $t_{j-1}$, the embedded characterization vector corresponding to the skill subject label of the previous sentence may be represented by $e_{j-1}^t$, and the first sequence feature vector may be represented by $h_j^t$. For example, an LSTM (Long Short-Term Memory) network may be used for the calculation, with the following formulas:

    $$h_j^t = \mathrm{LSTM}\big([e_{j-1}^t; \theta_s; h_{j-1, M_{j-1}^c}^c],\; h_{j-1}^t\big)$$

    $$e_j^t = W_{et}\, t_j$$
  • Where $h_{j-1}^t$ represents the first sequence feature vector of the previous sentence, $e_j^t$ represents the embedded characterization vector corresponding to the skill subject label of the current sentence, $t_j$ represents the skill subject label of the current sentence, and $W_{et}$ represents a pre-set parameter.
  • The subject generating sub-module 40312 acquires the first sequence feature vector $h_j^t$ of the second cyclic neural network sub-module 40311, and predicts the skill subject label $t_j$ of the current sentence based on the first vector sequence $u_j^t$ of the first attention sub-module 4021. By calculating the first sequence feature vector $h_j^t$, the prediction accuracy of the skill subject label of the current sentence is improved, and generation of the target recruitment position description text is facilitated.
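As a rough sketch only (class and dimension names are hypothetical, and inputs are assumed to carry a batch dimension), the sentence-level recurrence above might be written with `nn.LSTMCell`, concatenating the label embedding, the subject distribution vector, and the last word-level hidden state of the previous sentence:

```python
import torch
import torch.nn as nn

class SubjectStateUpdater(nn.Module):
    """Sketch of h_j^t = LSTM([e_{j-1}^t; theta_s; h_{j-1,M}^c], h_{j-1}^t)."""
    def __init__(self, embed_dim, num_subjects, word_state_dim, state_dim, num_labels):
        super().__init__()
        self.label_embedding = nn.Embedding(num_labels, embed_dim)  # plays the role of W_et
        self.cell = nn.LSTMCell(embed_dim + num_subjects + word_state_dim, state_dim)

    def forward(self, prev_label, theta_s, prev_word_state, prev_state):
        # prev_label: (B,) label t_{j-1}; prev_state: (h, c) pair for sentence j-1
        e_prev = self.label_embedding(prev_label)                   # e_{j-1}^t
        x = torch.cat([e_prev, theta_s, prev_word_state], dim=-1)
        h_j, c_j = self.cell(x, prev_state)                         # first sequence feature vector h_j^t
        return h_j, c_j
```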
  • In the present embodiment, optionally, the text generating sub-module includes: a third cyclic neural network sub-module for obtaining a second sequence feature vector based on the first sequence feature vector and the predicted word embedding characterization vector of the previous word; an intermediate processing sub-module for obtaining a pre-generated word probability vector according to the second vector sequence and the second sequence feature vector; and a copy mechanism sub-module configured to process the pre-generated word probability vector based on the first skill subject word distribution parameter to obtain the current word in the predicted target recruitment position description text.
  • Specifically, the text generating sub-module 4032 may include a third cyclic neural network sub-module 40321, an intermediate processing sub-module 40322, and a copy mechanism sub-module 40323. The third cyclic neural network sub-module 40321 obtains the first sequence feature vector $h_j^t$ and the predicted word embedding characterization vector of the previous word, to obtain the second sequence feature vector. The predicted word embedding characterization vector of the previous word may be represented by $e_{j,k-1}^c$, and the second sequence feature vector may be represented by $h_{j,k}^c$. For example, an LSTM may be used for the calculation, with the following formulas:

    $$h_{j,k}^c = \mathrm{LSTM}\big([e_{j,k-1}^c; \theta_s; h_j^t],\; h_{j,k-1}^c\big)$$

    $$e_{j,k}^c = W_{ec}\, y_{j,k}$$
  • Where $h_{j,k-1}^c$ represents the second sequence feature vector of the previous word, $e_{j,k}^c$ represents the word embedding characterization vector of the current word, $y_{j,k}$ represents the pre-generated word probability vector, and $W_{ec}$ represents a pre-set parameter.
  • The intermediate processing sub-module 40322 obtains a pre-generated word probability vector based on the second vector sequence $u_{j,k}^c$ and the second sequence feature vector $h_{j,k}^c$, and the pre-generated word probability vector may be represented by $y_{j,k}$. The copy mechanism sub-module 40323 processes the pre-generated word probability vector based on the first skill subject word distribution parameter to obtain the current word $s_{j,k}$ in the predicted target recruitment position description text. The first skill subject word distribution parameter may be pre-defined, represented by $\beta_s$. By dividing the text generating sub-module 4032 into three sub-modules, the description text of the target recruitment position is automatically generated, the matching accuracy between the description text and the target position is improved, and the generation efficiency of the description text is improved.
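Purely as an illustration (the gate and all names are assumptions; the text above does not specify the exact form of the copy mechanism), the word-level decoding step might blend the pre-generated word distribution with the subject-word distribution $\beta_s$ through a learned gate:

```python
import torch
import torch.nn as nn

class WordDecoderStep(nn.Module):
    """Sketch of the word-level step: LSTM state update, pre-generated word
    probabilities y_{j,k}, then a copy-style blend with beta_s."""
    def __init__(self, embed_dim, num_subjects, sent_dim, state_dim, vocab_size):
        super().__init__()
        self.cell = nn.LSTMCell(embed_dim + num_subjects + sent_dim, state_dim)
        # assumes the attention context u_{j,k}^c has the same dimension as the state
        self.out = nn.Linear(state_dim * 2, vocab_size)  # intermediate processing (assumed form)
        self.gate = nn.Linear(state_dim, 1)              # copy/generate gate (assumption)

    def forward(self, e_prev, theta_s, h_sent, state, u_c, beta_s, t_j):
        h, c = self.cell(torch.cat([e_prev, theta_s, h_sent], dim=-1), state)
        y = torch.softmax(self.out(torch.cat([h, u_c], dim=-1)), dim=-1)  # y_{j,k}
        g = torch.sigmoid(self.gate(h))                   # blend weight in (0, 1)
        p_word = g * y + (1 - g) * beta_s[t_j]            # mix with row of beta_s for label t_j
        return p_word.argmax(dim=-1), (h, c)              # current word s_{j,k}, new state
```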
  • One embodiment of the above-mentioned application has the advantage that the automatic generation of the target recruitment position description text is realized by dividing the description text generating sub-model into an encoder module, an attention module and a decoder module. The prior-art problem of manual extraction of position information by human resources employees is solved, human subjectivity is reduced, the time and cost for generating a recruitment position description text are saved, errors caused by the fact that human resources employees have knowledge gaps in the professional skills of different positions are avoided, accurate matching of recruitment positions with personnel to be recruited is facilitated, and recruitment efficiency is improved.
  • FIG. 5 is an illustrative flow chart of a method of training the deep neural network model according to an embodiment of the present disclosure, which is further optimized on the basis of the above-described embodiment to train the deep neural network to generate recruitment position description text. The method may be performed by a training apparatus of the deep neural network model, which may be implemented in software and/or hardware, and may be integrated in an electronic device having a computing capability. As shown in FIG. 5, the training method of the deep neural network model provided in the present embodiment may include the following steps.
  • S510 includes obtaining a first training sample data, and using the first training sample data to preliminarily train the pre-constructed text subject predicting sub-model to obtain the preliminary trained text subject predicting sub-model; where the first training sample data includes a first sample-related text of the first sample position and a first standard recruitment position description text corresponding to the first sample position;
  • At least two types of training sample data are collected and may be divided into a first training sample data and a second training sample data. The first training sample data may include a first sample-related text of the first sample position and a first standard recruitment position description text corresponding to the first sample position. The first sample-related text may include at least one of a resume text of a person that has been determined to meet a position requirement of the first sample position, a text including position responsibility data, and a text including project data related to position, and the first standard recruitment position description text is a standard recruitment position description text corresponding to the first sample position that has been edited.
  • FIG. 6 is an illustrative structural diagram of a deep neural network model in which a text subject prediction sub-model 610 and a description text generation sub-model 620 are pre-constructed. The text subject predicting sub-model 610 is initially trained by the first training sample data, and the training result is corrected according to the first standard recruitment position description text to obtain the initially trained text subject predicting sub-model 610.
  • In the present embodiment, optionally, on the basis of the foregoing embodiment, the text subject predicting sub-model further includes: a second subject distribution determining module for obtaining an original skill subject distribution vector according to a skill subject vector distribution parameter; a first text reconstruction submodule for generating predicted bag-of-word characterization data of the reconstructed original text related to the target position according to the second skill subject word distribution parameter and the original skill subject distribution vector; and a second text reconstruction sub-module for generating the reconstructed predictive bag-of-word characterization data of the standard recruitment position description text according to the first skill subject word distribution parameter and the target skill subject distribution vector.
  • Specifically, the text subject prediction sub-model 610 may include a bag-of-word feature extracting module 601, a distribution parameter calculating module 602, a first subject distribution determining module 603, a second subject distribution determining module 604, a first text reconstruction sub-module 605, and a second text reconstruction sub-module 606. The second subject distribution determining module 604 receives a skill subject vector distribution parameter µd and σd of the distribution parameter calculating module 602, and calculates an original skill subject distribution vector, which may be represented by θd .
  • The second subject distribution determining module 604 may include a second subject vector sampling sub-module 6041, a third fully connected network sub-module 6042, and a second subject distribution feature calculating sub-module 6043. The second subject vector sampling sub-module 6041 is configured to obtain the second skill subject vector based on the first skill subject vector distribution sub-parameter and the second skill subject vector distribution sub-parameter. The second skill subject vector may be calculated based on µd and σd, and may be represented by zd. For example, the subject vector may be obtained by sampling $z_d \sim \mathcal{N}(\mu_d, \sigma_d^2)$. The third fully connected network sub-module 6042 is configured to perform feature extraction for the second skill subject vector to obtain the second subject feature vector. The third fully connected network sub-module 6042 may include one or more layers of fully connected networks, and the fully connected network in the third fully connected network sub-module 6042 may be represented by $f_{\theta_d}$.
  • The second subject distribution feature calculating sub-module 6043 receives the second subject feature vector of the third fully connected network sub-module 6042, and obtains the original skill subject distribution vector according to the pre-set second activation function. The second activation function may be a softmax function, and the original skill subject distribution vector may be represented by θd, where $\theta_d = \mathrm{softmax}(f_{\theta_d}(z_d))$.
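A minimal sketch of this sampling and projection step, assuming the usual reparameterization trick (z = µ + σ·ε) is used so the sampling stays differentiable during training; all names are hypothetical:

```python
import torch
import torch.nn as nn

class SubjectDistribution(nn.Module):
    """Sketch: sample z_d ~ N(mu_d, sigma_d^2), then theta_d = softmax(f_theta_d(z_d))."""
    def __init__(self, latent_dim, num_subjects):
        super().__init__()
        self.f = nn.Linear(latent_dim, num_subjects)  # f_theta_d as a single fully connected layer

    def forward(self, mu_d, log_sigma_d):
        eps = torch.randn_like(mu_d)               # reparameterization trick (assumption)
        z_d = mu_d + torch.exp(log_sigma_d) * eps  # z_d ~ N(mu_d, sigma_d^2)
        return torch.softmax(self.f(z_d), dim=-1)  # original skill subject distribution theta_d
```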
  • The first text reconstruction sub-module 605 obtains the original skill subject distribution vector θd, and obtains the reconstructed predicted bag-of-word characterization data of the original text related to the target position according to the pre-defined second skill subject word distribution parameter. The second skill subject word distribution parameter may be represented by βd, and the prediction probability of the bag-of-word characterization data of the first text reconstruction sub-module 605 is calculated by:

    $$p(x_l) = \theta_d\, \beta_d$$

  • Where $p(x_l)$ represents the prediction probability of the bag-of-word characterization data of the original text related to the target position.
  • The second text reconstruction sub-module 606 obtains the predicted bag-of-word characterization data of the reconstructed standard recruitment position description text according to the first skill subject word distribution parameter and the target skill subject distribution vector θs, and the first skill subject word distribution parameter may be represented by βs. The bag-of-word characterization data prediction probability of the second text reconstruction sub-module 606 is calculated by:

    $$p(s_j \mid \theta_s, \beta_s) = \theta_s \prod_{k=1}^{M_j^s} \beta_{*, s_{j,k}}^s$$

  • Where $p(s_j \mid \theta_s, \beta_s)$ represents the prediction probability of the bag-of-word characterization data of the standard recruitment position description text, $M_j^s$ represents the number of words in the jth sentence after the bag-of-word feature selection, and $\beta_{*, s_{j,k}}^s$ represents the vector expression of the current sentence by the first skill subject word distribution parameter βs.
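Under the same assumptions as above (beta matrices shaped subjects × vocabulary; all names hypothetical), the two reconstruction probabilities might be computed as:

```python
import torch

def doc_word_probs(theta_d, beta_d):
    # p(x_l) = theta_d @ beta_d: mixture of subject-word distributions over the vocabulary
    return theta_d @ beta_d

def sentence_prob(theta_s, beta_s, word_ids):
    # p(s_j | theta_s, beta_s) = theta_s . prod_k beta_s[:, s_{j,k}]
    return torch.dot(theta_s, beta_s[:, word_ids].prod(dim=1))
```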
  • By dividing the text subject predicting sub-model 610 into the second subject distribution determining module 604, the first text reconstruction sub-module 605, and the second text reconstruction sub-module 606, accurate training of the text subject predicting sub-model 610 is realized, and the accuracy of the text subject prediction is improved, thereby improving the generation efficiency of the target recruitment position description text.
  • In the present embodiment, optionally, using the first training sample data to preliminarily train the pre-constructed text subject predicting sub-model to obtain a preliminarily trained text subject predicting sub-model may include: inputting the first sample-related text into the pre-constructed text subject predicting sub-model; calculating a first loss function value based on first disparity information and second disparity information by using a neural variational method; and adjusting the network parameters in the pre-constructed text subject predicting sub-model according to the calculated first loss function value until reaching the threshold of the number of iterations or the convergence of the value of the loss function, where the first disparity information is disparity information between the first predictive bag-of-word characterization data output by the first text reconstruction sub-module and the bag-of-word characterization data of the first sample-related text output by the bag-of-word feature extracting module, and the second disparity information is disparity information between the second predictive bag-of-word characterization data output by the second text reconstruction sub-module and the bag-of-word characterization data of the first standard recruitment position description text.
  • Specifically, the first sample-related text is input to the pre-constructed text subject predicting sub-model 610, the bag-of-word feature extracting module 601 outputs the bag-of-word characterization data $X_i^{bow}$ of the first sample-related text, the first text reconstruction sub-module 605 outputs the first predictive bag-of-word characterization data $X_i'^{bow}$, and the disparity information between $X_i^{bow}$ and $X_i'^{bow}$ is the first disparity information. The second predictive bag-of-word characterization data is output by the second text reconstruction sub-module 606, and the disparity information between the second predictive bag-of-word characterization data and the bag-of-word characterization data of the first standard recruitment position description text is used as the second disparity information. After obtaining the first disparity information and the second disparity information, the first loss function value is calculated by the neural variational method, and the network parameters in the text subject predicting sub-model 610 are adjusted according to the first loss function value until reaching the threshold of the number of iterations or the convergence of the value of the loss function, so that the bag-of-word characterization data output by the text subject predicting sub-model 610 meets the requirement of the bag-of-word characterization data of the first standard recruitment position description text.
  • The calculation formula of the first loss function value is as follows:

    $$-\sum_{k=1}^{M^d} \log\big(\theta_d\, \beta_{*, x_k}^d\big) + D_{KL}\big(q(\theta_d)\,\|\,p(\theta_d)\big) - \sum_{j=1}^{N} \log\Big(\theta_s \prod_{k=1}^{M_j^s} \beta_{*, s_{j,k}}^s\Big) + D_{KL}\big(q(\theta_s)\,\|\,p(\theta_s)\big)$$

  • Where $D_{KL}$ represents the Kullback-Leibler divergence (relative entropy) distance, $\beta_{*, x_k}^d$ represents the vector expression of the kth word of the current text by the second skill subject word distribution parameter βd, $\beta_{*, s_{j,k}}^s$ represents the vector expression of the jth sentence by the first skill subject word distribution parameter, p(θd) and p(θs) represent the actual probability distributions of the data, and q(θd) and q(θs) represent the estimated probability distribution functions of the neural variational approximation.
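A compact sketch of this loss, assuming diagonal Gaussian posteriors with a standard-normal prior so the KL term has the familiar closed form (the prior choice and all names are assumptions, not stated in the text):

```python
import torch

def kl_diag_gaussian(mu, log_sigma):
    # KL(N(mu, sigma^2) || N(0, I)) in closed form, standard-normal prior assumed
    return 0.5 * torch.sum(mu ** 2 + torch.exp(2 * log_sigma) - 2 * log_sigma - 1)

def first_loss(theta_d, beta_d, doc_ids, theta_s, beta_s, sent_ids, mu_d, ls_d, mu_s, ls_s):
    # negative log-likelihood of the document bag-of-words under theta_d
    nll_doc = -torch.log((theta_d @ beta_d)[doc_ids]).sum()
    # negative log-likelihood of each sentence of the standard description text
    nll_sent = -sum(torch.log(torch.dot(theta_s, beta_s[:, ids].prod(dim=1)))
                    for ids in sent_ids)
    return nll_doc + kl_diag_gaussian(mu_d, ls_d) + nll_sent + kl_diag_gaussian(mu_s, ls_s)
```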
  • By calculating the disparity information and the first loss function value, the preliminary training of the text subject predicting sub-model 610 is completed, and the accuracy of the text subject prediction is achieved.
  • S520 includes obtaining a second training sample data; where the second training sample data includes a second sample related text of the second sample position and a second standard recruitment position description text corresponding to the second sample position.
  • After obtaining the preliminarily trained text subject predicting sub-model 610, the second training sample data is obtained. The second sample-related text includes at least one of a resume text of a person that has been determined to meet a position requirement of the second sample position, a text containing position responsibility data, and a text containing project data related to position. The second standard recruitment position description text is an edited standard recruitment position description text corresponding to the second sample position.
  • S530 includes using the second training sample data to train the deep neural network model including the initially trained text subject predicting sub-model and the pre-constructed description text generating sub-model, to obtain the trained deep neural network model.
  • The preliminarily trained text subject predicting sub-model 610 and the pre-constructed description text generating sub-model 620 are trained by the second training sample data, and the output result of the deep neural network model is corrected according to the second standard recruitment position description text to obtain the trained deep neural network model.
  • Optionally, in the present embodiment, using the second training sample data to train the deep neural network model includes:
    • inputting the second sample-related text into the deep neural network model including the preliminarily trained text subject predicting sub-model and the pre-constructed description text generating sub-model;
    • calculating a second loss function value based on the third disparity information and the fourth disparity information; where the third disparity information is the disparity between the first predictive bag-of-word characterization data output by the first text reconstruction sub-module and the bag-of-word characterization data of the second sample related text output by the bag-of-word feature extracting module, and the fourth disparity information is the disparity information between the second predictive bag-of-word characterization data output by the second text reconstruction sub-module and the bag-of-word characterization data of the second standard recruitment position description text;
    • calculating a third loss function value based on the fifth disparity information; where the fifth disparity information is the disparity information between the second standard recruitment position description text and the text output by the description text generating sub-model;
    • determining an overall loss function value according to the calculated second loss function value and the third loss function value, and
    • adjusting network parameters in the text subject predicting sub-model and the description text generating sub-model according to the overall loss function value until reaching the threshold of the number of iterations or the convergence of the overall loss function value.
  • Specifically, after obtaining the preliminarily trained text subject predicting sub-model 610, the second sample-related text is input to the text subject predicting sub-model 610 and the description text generating sub-model 620. The first predictive bag-of-word characterization data is output by the first text reconstruction sub-module 605 in the text subject predicting sub-model 610, the bag-of-word characterization data of the second sample-related text is output by the bag-of-word feature extracting module 601 in the text subject predicting sub-model 610, and the disparity information between the first predictive bag-of-word characterization data and the bag-of-word characterization data of the second sample-related text is used as the third disparity information. The second predictive bag-of-word characterization data is output by the second text reconstruction sub-module 606, and the disparity information between the second predictive bag-of-word characterization data and the bag-of-word characterization data of the second standard recruitment position description text is used as the fourth disparity information. After obtaining the third disparity information and the fourth disparity information, the second loss function value may be calculated by using the neural variational method.
  • The description text is output by the description text generating sub-model 620, the disparity information between the second standard recruitment position description text and the output description text is used as the fifth disparity information, and the third loss function value is calculated based on the fifth disparity information. An overall loss function value is determined based on the calculated second loss function value, the third loss function value, and corresponding weights. The network parameters in the text subject predicting sub-model and the description text generating sub-model are adjusted according to the overall loss function value until reaching the threshold of the number of iterations or the convergence of the overall loss function value, so that the deep neural network model 600 can output recruitment position description text meeting the requirements. The calculation of the overall loss function for the text subject predicting sub-model 610 and the description text generating sub-model 620 improves the accuracy of the description text generated by the deep neural network model 600, avoids inaccuracy of the description text due to subjectivity and field differences, and improves the description text generation efficiency.
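Illustratively (the weights and all names are hypothetical; the text only states that the two loss values are combined with corresponding weights), a joint training step might look like:

```python
import torch

def training_step(optimizer, second_loss, third_loss, w2=1.0, w3=1.0):
    # overall loss = weighted sum of the bag-of-words loss and the generation loss
    loss = w2 * second_loss + w3 * third_loss
    optimizer.zero_grad()
    loss.backward()   # adjusts parameters of both sub-models jointly
    optimizer.step()
    return float(loss)
```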
  • FIG. 7 is an illustrative schematic diagram of a deep neural network model 700 including a text subject predicting sub-model 710 and a description text generating sub-model 720, according to an embodiment of the present disclosure.
  • The text subject predicting sub-model 710 includes a bag-of-word feature extracting module 701, a distribution parameter calculating module 702, a first subject distribution determining module 703, a second subject distribution determining module 704, a first text reconstruction sub-module 705, and a second text reconstruction sub-module 706. The bag-of-word feature extracting module 701 includes a bag-of-word generating sub-module 7011 and a first fully connected network sub-module 7012. The distribution parameter calculating module 702 includes a first parameter calculating sub-module 7021 and a second parameter calculating sub-module 7022. The first subject distribution determining module 703 includes a third parameter calculating sub-module 7031, a fourth parameter calculating sub-module 7032, a first subject vector sampling sub-module 7033, a second fully connected network sub-module 7034 and a first subject distribution feature calculating sub-module 7035. The second subject distribution determining module 704 includes a second subject vector sampling sub-module 7041, a third fully connected network sub-module 7042 and a second subject distribution feature calculating sub-module 7043. The predicted bag-of-word characterization data of the reconstructed standard recruitment position description text is represented as $s_j^{bow}$.
  • The description text generating sub-model 720 includes an encoder module 707, an attention module 708, and a decoder module 709. The encoder module 707 includes a word vector generating sub-module 7071 and a first cyclic neural network sub-module 7072. The attention module 708 includes a first attention sub-module 7081 and a second attention sub-module 7082. The decoder module 709 includes a subject prediction sub-module 7091 and a text generating sub-module 7092. The subject predicting sub-module 7091 includes a second cyclic neural network sub-module 70911 and a subject generating sub-module 70912. The text generating sub-module 7092 includes a third cyclic neural network sub-module 70921, an intermediate processing sub-module 70922, and a copy mechanism sub-module 70923. The kth word in the original text related to the target position or the sample related text is represented as xk .
  • One embodiment of the above-mentioned application has the advantage of achieving preliminary training of the text subject predicting sub-model by obtaining the first training sample data, and of further training the deep neural network model by obtaining the second training sample data, so that the description text output by the deep neural network model meets the requirements of the standard text, the accuracy of the description text is improved, and the output efficiency of the target position description text is further improved.
  • FIG. 8 is a schematic structural diagram of an apparatus for generating a recruitment position description text according to an embodiment of the present disclosure. The apparatus may execute the method for generating a recruitment position description text according to the embodiments of the present disclosure, and has the function modules and beneficial effects corresponding to the method. As shown in FIG. 8, the apparatus 800 may include: an original text obtaining module 801 configured to obtain an original text related to a target position; and a description text generating module 802 configured to generate the target recruitment position description text corresponding to the target position based on the original text related to the target position and the pre-trained deep neural network model.
  • Optionally, the original text related to the target position includes at least one of a resume text of a person determined to meet a position requirement, a text containing position responsibility data, and a text containing project data related to position.
  • Alternatively, the deep neural network model includes: a text subject predicting sub-model for predicting a target skill subject distribution vector based on the original text related to the target position; and a description text generating sub-model for generating a target recruitment position description text of the target position according to the target skill subject distribution vector.
  • Alternatively, the text subject predicting sub-model includes: a bag-of-word feature extracting module for extracting a bag-of-word feature vector of the original text related to the target position; a distribution parameter calculating module for calculating a skill subject vector distribution parameter according to the bag-of-word feature vector and a non-linear network layer; a first subject distribution determining module for obtaining a target skill subject distribution vector according to the skill subject vector distribution parameter and a pre-set subject distribution hypothesis parameter.
  • Optionally, the bag-of-word feature extracting module includes: a bag-of-word generating sub-module for generating bag-of-word characterization data of the original text related to the target position; and a first fully connected network sub-module for performing feature extraction for the bag-of-word characterization data to obtain a bag-of-word feature vector.
  • Optionally, the distribution parameter calculating module includes: a first parameter calculating sub-module for calculating a first skill subject vector distribution sub-parameter according to the bag-of-word feature vector and the first non-linear network layer; and a second parameter calculating sub-module for calculating a second skill subject vector distribution sub-parameter based on the bag-of-word feature vector and the second non-linear network layer.
  • Optionally, the first subject distribution determining module includes: a third parameter calculating sub-module for calculating a third skill subject vector distribution parameter according to the first skill subject vector distribution sub-parameter and the first pre-set subject distribution hypothesis sub-parameter; a fourth parameter calculating sub-module for calculating a fourth skill subject vector distribution parameter according to the second skill subject vector distribution sub-parameter and the second pre-set subject distribution hypothesis sub-parameter; a first subject vector sampling sub-module for obtaining a first skill subject vector according to a third skill subject vector distribution parameter and a fourth skill subject vector distribution parameter; a second fully connected network sub-module for performing feature extraction for the first skill subject vector to obtain the first subject feature vector; and a first subject distribution feature calculating sub-module for obtaining a target skill subject distribution vector based on the first subject feature vector and the first activation function.
  • Optionally, the description text generating sub-model includes: an encoder module for generating a sequence of semantic characterization vectors of the current sentence in the original text related to the target position; an attention module for performing weighted transformation for the sequence of the semantic characterization vectors according to the target skill subject distribution vectors; a decoder module for predicting a skill subject label of a current sentence according to the weighted and transformed semantic characterization vector sequence, and predicting the current word of the target recruitment position description text according to the skill subject label.
  • Optionally, the encoder module includes: a word vector generating submodule for generating a word vector of each word included in the current sentence in the original text related to the target position; and a first cyclic neural network sub-module for generating a sequence of semantic characterization vectors of the current sentence according to each word vector.
  • Optionally, the attention module includes a first attention sub-module and a second attention sub-module, and the decoder module includes a subject predicting sub-module and a text generating sub-module. The first attention sub-module is configured for performing weighted transformation on the semantic characterization vector sequence according to the target skill subject distribution vector and the hidden layer feature state vector in the subject predicting sub-module to obtain a weighted and transformed first vector sequence. The second attention sub-module is configured for performing weighted transformation on the semantic characterization vector sequence according to the target skill subject distribution vector and the hidden layer feature state vector in the text generating sub-module to obtain a weighted and transformed second vector sequence.
  • The subject predicting sub-module is configured for predicting a skill subject label of a current sentence based on a target skill subject distribution vector and a first vector sequence. The text generating sub-module is configured for predicting the current word in the target recruitment position description text based on the skill subject label of the current sentence and the second vector sequence.
  • Optionally, the subject predicting sub-module includes: a second cyclic neural network sub-module and a subject generating sub-module. The second cyclic neural network sub-module is configured for obtaining a first sequence feature vector, based on a hidden layer feature state vector of a cyclic neural network predicting a previous sentence in the text generating sub-module, an embedded characterization vector corresponding to a skill subject label of the previous sentence, and a target skill subject distribution vector. The subject generating sub-module is configured for predicting a skill subject label of a current sentence based on the first sequence feature vector and the first vector sequence.
  • Optionally, the text generation sub-module includes: a third circulating neural network sub-module for obtaining a second sequence feature vector based on the first sequence feature vector and the predicted word embedding characterization vector of the previous word; an intermediate processing sub-module for obtaining a pre-generated word probability vector according to the second vector sequence and the second sequence feature vector; and a copy mechanism sub-module for processing the pre-generated word probability vector based on the first skill subject word distribution parameter, to obtain the current word in the predicted target recruitment position description text.
  • Optionally, the training process of the deep neural network model includes:
    • acquiring first training sample data; and using the first training sample data to preliminarily train the pre-constructed text subject prediction sub-model to obtain a preliminary trained text subject prediction sub-model; where the first training sample data includes a first sample-related text of the first sample position and a first standard recruitment position description text corresponding to the first sample position;
    • acquiring second training sample data; where the second training sample data includes a second sample related text of the second sample position and a second standard recruitment position description text corresponding to the second sample position; and
    • using the second training sample data to train a deep neural network model including a preliminary trained text subject prediction sub-model and a pre-constructed description text generation sub-model to obtain a trained deep neural network model.
  • Optionally, the text subject prediction sub-model further includes: a second subject distribution determining module for obtaining an original skill subject distribution vector based on the skill subject vector distribution parameter; a first text reconstruction sub-module for generating predicted bag-of-word characterization data of the reconstructed original text related to the target position, based on the second skill subject word distribution parameter and the original skill subject distribution vector; and a second text reconstruction sub-module for generating the predictive bag-of-word characterization data of the reconstructed standard recruitment position description text, based on the first skill subject word distribution parameter and the target skill subject distribution vector.
  • Optionally, the process of preliminarily training the pre-constructed text subject prediction sub-model by using the first training sample data, to obtain a preliminarily trained text subject prediction sub-model, includes:
    • inputting a first sample-related text into a pre-constructed text subject prediction sub-model;
    • calculating a first loss function value by using a neural variation method, based on the first disparity information and the second disparity information; where the first disparity information is disparity information between first predictive bag-of-word characterization data output by the first text reconstruction sub-module and the bag-of-word characterization data of the text related to the first sample output by the bag-of-word feature extraction module, and the second disparity information is disparity information between the second predictive bag-of-word characterization data output by the second text reconstruction sub-module and the bag-of-word characterization data of the first standard recruitment position description text; and
    • adjusting the network parameters in the pre-constructed text subject prediction sub-model based on the calculated first loss function value until reaching the threshold number of iterations or the convergence of the value of the loss function.
  • Optionally, using the second training sample data to train the deep neural network model including the initially trained text subject predicting sub-model and the pre-constructed description text generating sub-model, to obtain the trained deep neural network model includes:
    • inputting the second sample-related text into the deep neural network model including the preliminarily trained text subject predicting sub-model and the pre-constructed description text generating sub-model;
    • calculating a second loss function value based on the third disparity information and the fourth disparity information; where the third disparity information is the disparity between the first predictive bag-of-word characterization data output by the first text reconstruction sub-module and the bag-of-word characterization data of the second sample related text output by the bag-of-word feature extracting module, and the fourth disparity information is the disparity information between the second predictive bag-of-word characterization data output by the second text reconstruction sub-module and the bag-of-word characterization data of the second standard recruitment position description text;
    • calculating a third loss function value based on the fifth disparity information; where the fifth disparity information is the disparity information between the second standard recruitment position description text and the text output by the description text generating sub-model;
    • determining an overall loss function value based on the calculated second loss function value and the third loss function value, and adjusting network parameters in the text subject prediction sub-model and the description text generating sub-model based on the overall loss function value, until reaching the threshold of the number of iterations or the convergence of an overall loss function value.
  • One embodiment of the above application has the advantage of automatically extracting the data in the original text related to the target position through the deep neural network model, to obtain the target recruitment position description text corresponding to the target position. The method solves the prior-art problems that position information is manually extracted by human resources employees and the recruitment position description text is manually written; human subjectivity is thus reduced, the generation time and cost of the recruitment position description text are saved, errors caused by the fact that human resources employees have knowledge gaps in the professional skills of different positions are avoided, accurate matching between recruitment positions and personnel to be recruited is realized, and the generation efficiency of the recruitment position description text and the recruitment efficiency are improved.
  • According to an embodiment of the present disclosure, the present disclosure also provides an electronic device and a readable storage medium.
  • FIG. 9 is a block diagram of an electronic device of a method for generating a recruitment position description text according to an embodiment of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers. Electronic devices may also represent various forms of mobile devices, such as personal digital processing devices, cellular telephones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are by way of example only and are not intended to limit the implementation of the present disclosure as described and/or claimed herein.
  • As shown in FIG. 9, the electronic device includes one or more processors 901, a memory 902, and interfaces for connecting components, including a high-speed interface and a low-speed interface. The various components are interconnected by different buses and may be mounted on a common motherboard or otherwise as desired. The processor may process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device, such as a display device coupled to an interface. In other embodiments, multiple processors and/or multiple buses may be used with multiple memories, if desired. Similarly, a plurality of electronic devices may be connected, each providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). One processor 901 is exemplified in FIG. 9.
  • The memory 902 is a non-transitory computer readable storage medium provided in the present disclosure. The memory stores instructions executable by at least one processor to cause the at least one processor to perform the method for generating a recruitment position description text provided in the present disclosure. The non-transitory computer-readable storage medium of the present disclosure stores computer instructions for causing a computer to execute the method for generating a recruitment position description text provided in the present disclosure.
  • The memory 902, as a non-transitory computer readable storage medium, can be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the method for generating a recruitment position description text in the embodiments of the present disclosure. The processor 901 executes various functional applications and data processing of the server by running the non-transitory software programs, instructions, and modules stored in the memory 902, that is, implements the method for generating a recruitment position description text in the above-described method embodiments.
  • The memory 902 may include a storage program area and a storage data area, where the storage program area may store an operating system and an application program required by at least one function, and the storage data area may store data created according to the use of the electronic device for generating a recruitment position description text, or the like. In addition, the memory 902 may include a high-speed random access memory, and may also include a non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or another non-transitory solid state storage device. In some embodiments, the memory 902 may optionally include memories remotely disposed relative to the processor 901, and these remote memories may be connected via a network to the electronic device for generating a recruitment position description text. Examples of such networks include, but are not limited to, the Internet, enterprise intranets, local area networks, mobile communication networks, and combinations thereof.
  • The electronic device for generating a recruitment position description text may further include an input device 903 and an output device 904. The processor 901, the memory 902, the input device 903, and the output device 904 may be connected via a bus or in other manners; a bus connection is illustrated in FIG. 9.
  • The input device 903 may receive input numeric or character information, and generate key signal input related to user settings and functional control of the electronic device for generating a recruitment position description text, and may be, for example, a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, a joystick, or the like. The output device 904 may include a display device, an auxiliary lighting device (e.g., an LED), a tactile feedback device (e.g., a vibration motor), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light emitting diode (LED) display, and a plasma display. In some embodiments, the display device may be a touch screen.
  • The various embodiments of the systems and techniques described herein may be implemented in digital electronic circuit systems, integrated circuit systems, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include being implemented in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a dedicated or general purpose programmable processor, and which may receive data and instructions from a memory system, at least one input device, and at least one output device, and transmit the data and instructions to the memory system, the at least one input device, and the at least one output device.
  • These computing programs (also referred to as programs, software, software applications, or code) include machine instructions of a programmable processor and may be implemented in high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, device, and/or apparatus (e.g., magnetic disk, optical disk, memory, programmable logic device (PLD)) for providing machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as machine-readable signals. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
  • To provide interaction with a user, the systems and techniques described herein may be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to a computer. Other types of devices may also be used to provide interaction with a user; for example, the feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
  • The systems and techniques described herein may be implemented in a computing system including a background component (e.g., as a data server), or a computing system including a middleware component (e.g., an application server), or a computing system including a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user may interact with embodiments of the systems and techniques described herein), or a computing system including any combination of such background component, middleware component, or front-end component. The components of the system may be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN), a block chain network, and the Internet.
  • The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship between the client and the server is generated by computer programs running on the respective computers and having a client-server relationship with each other.
  • According to the technical solution of the embodiments of the present disclosure, the data in the original text related to the target position is automatically extracted through the deep neural network model to obtain the target recruitment position description text corresponding to the target position. The technical solution solves the prior-art problems that position information is manually extracted by human resources employees and the recruitment position description text is manually written; human subjectivity is thus reduced, the generation time and cost of the recruitment position description text are saved, errors caused by the fact that human resources employees have knowledge gaps in the professional skills of different positions are avoided, accurate matching between recruitment positions and personnel to be recruited is realized, and the generation efficiency of the recruitment position description text and the recruitment efficiency are improved.
  • It is to be understood that reordering, adding or deleting of the steps may be performed when using the various forms shown above. For example, the steps described in the present disclosure may be performed in parallel or sequentially or in a different order, so long as the desired results of the technical solution disclosed in the present disclosure can be realized, and no limitation is imposed herein.
  • The foregoing detailed description is not intended to limit the scope of the present disclosure. It will be appreciated by those skilled in the art that various modifications, combinations, sub-combinations, and substitutions may be made depending on design requirements and other factors. Any modifications, equivalents, and improvements that fall within the spirit and principles of this application are intended to be included within the scope of this application.

Claims (15)

  1. A method for generating a recruitment position description text, comprising:
    obtaining (S110) an original text related to a target position; and
    generating (S120) a target recruitment position description text corresponding to the target position based on the obtained original text and a pre-trained deep neural network model.
  2. The method of claim 1, wherein the original text comprises at least one of:
    a resume text of personnel that has been determined to meet position requirements,
    a text containing position responsibility data, and
    a text containing project data related to position.
  3. The method of claim 1 or 2, wherein the deep neural network model comprises:
    a text subject predicting sub-model (210) for predicting a target skill subject distribution vector based on the original text; and
    a description text generating sub-model (220) for generating a target recruitment position description text of the target position based on the predicted target skill subject distribution vector.
  4. The method of claim 3, wherein the text subject predicting sub-model (210) comprises:
    a bag-of-word feature extracting module for extracting a bag-of-word feature vector of the original text;
    a distribution parameter calculating module for calculating a skill subject vector distribution parameter based on the extracted bag-of-word feature vector and a non-linear network layer; and
    a first subject distribution determining module for obtaining a target skill subject distribution vector based on the skill subject vector distribution parameter and a pre-set subject distribution hypothesis parameter.
  5. The method of claim 4, wherein the bag-of-word feature extracting module (301) comprises:
    a bag-of-word generating sub-module (3011) for generating bag-of-word characterization data of the original text; and
    a first fully connected network sub-module (3012) for performing feature extraction for the bag-of-word characterization data to obtain the bag-of-word feature vector.
  6. The method of claim 4 or 5, wherein the distribution parameter calculating module (302) comprises:
    a first parameter calculating sub-module (3021) for calculating a first skill subject vector distribution sub-parameter based on the bag-of-word feature vector and a first non-linear network layer; and
    a second parameter calculating sub-module (3022) for calculating a second skill subject vector distribution sub-parameter based on the bag-of-word feature vector and a second non-linear network layer, wherein the first subject distribution determining module (303) preferably comprises:
    a third parameter calculating sub-module (3031) for calculating a third skill subject vector distribution parameter based on the first skill subject vector distribution sub-parameter and a first pre-set subject distribution hypothesis sub-parameter; and
    a fourth parameter calculating sub-module (3032) for calculating a fourth skill subject vector distribution parameter based on the second skill subject vector distribution sub-parameter and a second pre-set subject distribution hypothesis sub-parameter;
    a first subject vector sampling sub-module (3033) for obtaining a first skill subject vector based on the third skill subject vector distribution parameter and the fourth skill subject vector distribution parameter;
    a second fully connected network submodule (3034) for extracting the first subject feature vector from the first skill subject vector; and
    a first subject distribution feature calculating sub-module (3035) for obtaining a target skill subject distribution vector based on the extracted first subject feature vector and a first activation function.
  7. The method of any one of claims 3 to 6, wherein the description text generating sub-model (220) comprises:
    an encoder module (401) for generating a sequence of semantic characterization vectors of a current sentence in the original text;
    an attention module (402) for performing a weighted transformation on the generated sequence based on the target skill subject distribution vector; and
    a decoder module (403) for predicting a skill subject label of the current sentence based on the weighted and transformed sequence, and predicting a current word of the target recruitment position description text based on the predicted skill subject label, wherein the encoder module (401) preferably comprises:
    a word vector generating sub-module (4011) for generating a word vector for each word included in the current sentence in the original text; and
    a first recurrent neural network sub-module (4012) for generating the sequence of semantic characterization vectors of the current sentence based on the respective word vectors.
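  A minimal sketch of the encoder module (401) of claim 7, assuming a word embedding table for sub-module 4011 and a bidirectional GRU for the recurrent network sub-module 4012; all dimensions are illustrative:

```python
import torch
import torch.nn as nn

class SentenceEncoder(nn.Module):
    """Sketch of encoder module 401: word vectors (sub-module 4011) fed
    through a recurrent network (sub-module 4012) give the sequence of
    semantic characterization vectors."""

    def __init__(self, vocab_size=5000, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)          # 4011
        self.rnn = nn.GRU(embed_dim, hidden_dim,
                          batch_first=True, bidirectional=True)       # 4012

    def forward(self, sentence_ids):            # (batch, seq_len) word ids
        word_vectors = self.embedding(sentence_ids)
        outputs, _ = self.rnn(word_vectors)     # (batch, seq_len, 2 * hidden_dim)
        return outputs                          # semantic characterization vectors

encoder = SentenceEncoder()
seq = encoder(torch.randint(0, 5000, (1, 12)))  # one 12-word sentence
print(seq.shape)                                # torch.Size([1, 12, 512])
```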
  8. The method of claim 7, wherein the attention module (402) comprises a first attention sub-module (4021) and a second attention sub-module (4022), and the decoder module (403) comprises a subject predicting sub-module (4031) and a text generating sub-module (4032), wherein
    the first attention sub-module (4021) is configured to perform a weighted transformation on the sequence of semantic characterization vectors, based on the target skill subject distribution vector and a hidden layer feature state vector in the subject predicting sub-module (4031), to obtain a weighted and transformed first vector sequence;
    the second attention sub-module (4022) is configured to perform a weighted transformation on the sequence of semantic characterization vectors, based on the target skill subject distribution vector and a hidden layer feature state vector in the text generating sub-module (4032), to obtain a weighted and transformed second vector sequence;
    the subject predicting sub-module (4031) is configured to predict a skill subject label of the current sentence based on the target skill subject distribution vector and the weighted and transformed first vector sequence; and
    the text generating sub-module (4032) is configured to predict the current word in the target recruitment position description text based on the skill subject label of the current sentence and the weighted and transformed second vector sequence.
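  The two attention sub-modules of claim 8 can be read as the same topic-conditioned attention applied twice, once with the hidden state of the subject predicting sub-module and once with that of the text generating sub-module. A sketch of one such weighted transformation (the additive scoring form and all dimensions are assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopicConditionedAttention(nn.Module):
    """Sketch of attention sub-modules 4021/4022: score each encoder
    position from a decoder hidden state and the target skill subject
    distribution vector, then weight the sequence accordingly."""

    def __init__(self, enc_dim=512, dec_dim=256, topic_dim=30):
        super().__init__()
        self.score = nn.Linear(enc_dim + dec_dim + topic_dim, 1)

    def forward(self, enc_seq, dec_hidden, topic_vec):
        # enc_seq: (seq_len, enc_dim); dec_hidden: (dec_dim,); topic_vec: (topic_dim,)
        seq_len = enc_seq.size(0)
        features = torch.cat([enc_seq,
                              dec_hidden.expand(seq_len, -1),
                              topic_vec.expand(seq_len, -1)], dim=-1)
        weights = F.softmax(self.score(features).squeeze(-1), dim=0)
        return weights.unsqueeze(-1) * enc_seq   # weighted, transformed sequence

attn = TopicConditionedAttention()
weighted = attn(torch.randn(12, 512), torch.randn(256), torch.randn(30))
print(weighted.shape)   # torch.Size([12, 512])
```

  Sub-modules 4021 and 4022 would then be two instances of this class, differing only in which sub-module's hidden layer feature state vector conditions the scores.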
  9. The method of claim 8, wherein the subject predicting sub-module comprises:
    a second recurrent neural network sub-module (40311) for obtaining a first sequence feature vector based on the hidden layer feature state vector produced by the recurrent neural network in the text generating sub-module when predicting the previous sentence, an embedded characterization vector corresponding to the skill subject label of the previous sentence, and the target skill subject distribution vector; and
    a subject generating sub-module (40312) for predicting the skill subject label of the current sentence based on the first sequence feature vector and the first vector sequence, wherein the text generating sub-module preferably comprises:
    a third recurrent neural network sub-module (40321) for obtaining a second sequence feature vector based on the first sequence feature vector and the word embedding characterization vector of the previously predicted word;
    an intermediate processing sub-module (40322) for obtaining a pre-generated word probability vector based on the second vector sequence and the second sequence feature vector; and
    a copy mechanism sub-module (40323) for processing the pre-generated word probability vector based on the first skill subject word distribution parameter, to obtain the current word in the predicted target recruitment position description text.
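  The copy mechanism sub-module (40323) of claim 9 plausibly blends the decoder's pre-generated word probabilities with the skill subject word distribution, so that salient skill words can be copied into the output. A sketch with an assumed scalar copy gate:

```python
import torch
import torch.nn.functional as F

def copy_mechanism(pre_gen_logits, subject_word_dist, gate):
    """Sketch of copy mechanism sub-module 40323: blend the pre-generated
    word probabilities with the skill subject word distribution; the
    scalar gate and convex combination are assumptions."""
    gen_probs = F.softmax(pre_gen_logits, dim=-1)
    return (1.0 - gate) * gen_probs + gate * subject_word_dist

vocab = 5000
logits = torch.randn(vocab)                           # from sub-module 40322
subject_word = F.softmax(torch.randn(vocab), dim=-1)  # skill subject word distribution
probs = copy_mechanism(logits, subject_word, gate=0.3)
next_word_id = torch.argmax(probs).item()             # predicted current word
print(probs.sum(), next_word_id)
```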
  10. The method of claim 4, wherein the training process of the deep neural network model comprises:
    obtaining (S510) first training sample data, wherein the first training sample data comprise a first sample-related text of a first sample position and a first standard recruitment position description text corresponding to the first sample position;
    using (S510) the obtained first training sample data to preliminarily train the pre-constructed text subject predicting sub-model (210), to obtain a preliminarily trained text subject predicting sub-model (210);
    obtaining (S520) second training sample data, wherein the second training sample data comprise a second sample-related text of a second sample position and a second standard recruitment position description text corresponding to the second sample position; and
    using (S530) the obtained second training sample data to train the deep neural network model comprising the preliminarily trained text subject predicting sub-model (210) and a pre-constructed description text generating sub-model (220), to obtain a trained deep neural network model.
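  The two-stage procedure of claim 10 amounts to pre-training the topic sub-model alone and then training the full model end to end. A schematic training loop with toy stand-ins for the sub-models, losses, and data (everything here is assumed for illustration):

```python
import torch
import torch.nn as nn

# Toy stand-ins for the two sub-models; their real architectures are
# sketched above. All shapes, losses and data here are assumptions.
topic_model = nn.Linear(5000, 30)     # text subject predicting sub-model (210)
generator = nn.Linear(30, 5000)       # description text generating sub-model (220)

def first_loss(model, bow):           # placeholder preliminary-training loss
    return model(bow).pow(2).mean()

def overall_loss(tm, gen, bow):       # placeholder joint-training loss
    return gen(tm(bow)).pow(2).mean()

# Stage 1 (S510): preliminarily train sub-model 210 on first training sample data.
opt1 = torch.optim.Adam(topic_model.parameters(), lr=1e-3)
for bow in [torch.randn(5000) for _ in range(8)]:
    opt1.zero_grad()
    first_loss(topic_model, bow).backward()
    opt1.step()

# Stages S520/S530: train the full deep neural network model jointly.
params = list(topic_model.parameters()) + list(generator.parameters())
opt2 = torch.optim.Adam(params, lr=1e-4)
for bow in [torch.randn(5000) for _ in range(8)]:
    opt2.zero_grad()
    overall_loss(topic_model, generator, bow).backward()
    opt2.step()
```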
  11. The method of claim 10, wherein the text subject predicting sub-model (210) further comprises:
    a second subject distribution determining module (6041) for obtaining an original skill subject distribution vector based on the skill subject vector distribution parameter;
    a first text reconstruction sub-module (605) for generating first predicted bag-of-word characterization data of the reconstructed original text related to the target position, based on the second skill subject word distribution parameter and the original skill subject distribution vector; and
    a second text reconstruction sub-module (606) for generating second predicted bag-of-word characterization data of the reconstructed standard recruitment position description text, based on the first skill subject word distribution parameter and the target skill subject distribution vector, wherein using the obtained first training sample data to preliminarily train the pre-constructed text subject predicting sub-model (210) preferably comprises:
    inputting the first sample-related text into the pre-constructed text subject predicting sub-model (210);
    calculating a first loss function value based on first disparity information and second disparity information by using a neural variational method, wherein the first disparity information is disparity information between the first predicted bag-of-word characterization data output by the first text reconstruction sub-module (605) and the bag-of-word characterization data of the first sample-related text output by the bag-of-word feature extracting module (301), and the second disparity information is disparity information between the second predicted bag-of-word characterization data output by the second text reconstruction sub-module (606) and the bag-of-word characterization data of the first standard recruitment position description text; and
    adjusting network parameters in the pre-constructed text subject predicting sub-model (210) based on the calculated first loss function value, until a threshold number of iterations is reached or the first loss function value converges.
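  If the neural variational method of claim 11 is the usual evidence-lower-bound training of a variational topic model, the first loss function value would combine the two bag-of-word reconstruction disparities with a KL regularizer. A sketch under that assumption (the cross-entropy form and the equal weighting of the terms are also assumed):

```python
import torch
import torch.nn.functional as F

def first_loss_value(pred_bow_1, target_bow_1, pred_bow_2, target_bow_2,
                     mu, logvar):
    """Sketch of claim 11's first loss: two bag-of-word reconstruction
    disparities plus the KL term a neural variational method contributes."""
    rec1 = -(target_bow_1 * torch.log(pred_bow_1 + 1e-10)).sum()  # first disparity
    rec2 = -(target_bow_2 * torch.log(pred_bow_2 + 1e-10)).sum()  # second disparity
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum()     # variational term
    return rec1 + rec2 + kl

vocab, topics = 5000, 30
loss = first_loss_value(
    F.softmax(torch.randn(vocab), dim=-1), torch.randint(0, 2, (vocab,)).float(),
    F.softmax(torch.randn(vocab), dim=-1), torch.randint(0, 2, (vocab,)).float(),
    torch.randn(topics), torch.randn(topics))
print(loss.item())
```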
  12. The method of claim 11, wherein using the second training sample data to train the deep neural network model comprising the preliminarily trained text subject predicting sub-model (210) and the pre-constructed description text generating sub-model (220), to obtain the trained deep neural network model, comprises:
    inputting the second sample-related text into the deep neural network model comprising the preliminarily trained text subject predicting sub-model (210) and the pre-constructed description text generating sub-model (220);
    calculating a second loss function value based on third disparity information and fourth disparity information, wherein the third disparity information is disparity information between the first predicted bag-of-word characterization data output by the first text reconstruction sub-module (605) and the bag-of-word characterization data of the second sample-related text output by the bag-of-word feature extracting module (301), and the fourth disparity information is disparity information between the second predicted bag-of-word characterization data output by the second text reconstruction sub-module (606) and the bag-of-word characterization data of the second standard recruitment position description text;
    calculating a third loss function value based on fifth disparity information, wherein the fifth disparity information is disparity information between the second standard recruitment position description text and the text output by the description text generating sub-model (220);
    determining an overall loss function value based on the calculated second loss function value and the calculated third loss function value; and
    adjusting network parameters in the text subject predicting sub-model (210) and the description text generating sub-model (220) based on the overall loss function value, until a threshold number of iterations is reached or the overall loss function value converges.
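  The third loss function value of claim 12 is a disparity between generated and standard description text, which in sequence generation is typically token-level cross-entropy; the overall loss then combines it with the second loss function value. A sketch, with the weighting factor assumed:

```python
import torch
import torch.nn.functional as F

def third_loss_value(word_logits, target_ids):
    """Fifth disparity of claim 12, sketched as token-level cross-entropy
    between the generated text and the standard description text."""
    return F.cross_entropy(word_logits, target_ids)

second_loss = torch.tensor(2.3)            # claim-11-style topic sub-model loss
logits = torch.randn(7, 5000)              # logits for 7 generated word positions
targets = torch.randint(0, 5000, (7,))     # the standard text's word ids
lam = 1.0                                  # assumed weighting factor
overall = second_loss + lam * third_loss_value(logits, targets)
print(overall.item())                      # overall loss function value
```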
  13. An apparatus for generating a recruitment position description text, comprising:
    modules for performing the method according to any one of claims 1 to 12.
  14. An electronic device, comprising:
    at least one processor; and
    a memory communicatively connected to the at least one processor; wherein
    the memory stores instructions executable by the at least one processor, and the instructions, when executed by the at least one processor, cause the at least one processor to perform the method for generating the recruitment position description text according to any one of claims 1 to 12.
  15. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method for generating a recruitment position description text according to any one of claims 1 to 12.
Application EP21165688.9A (priority date 2020-05-08; filed 2021-03-29): Method, apparatus, device and medium for generating recruitment position description text; status: Pending; published as EP3859588A3 (en).

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202010381686.9A (published as CN113627135B) | 2020-05-08 | 2020-05-08 | Recruitment post description text generation method, device, equipment and medium

Publications (2)

Publication Number | Publication Date
EP3859588A2 | 2021-08-04
EP3859588A3 | 2021-10-20

Family

Family ID: 75277915

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
EP21165688.9A | Method, apparatus, device and medium for generating recruitment position description text | 2020-05-08 | 2021-03-29

Country Status (5)

Country Link
US (1) US20210216726A1 (en)
EP (1) EP3859588A3 (en)
JP (1) JP2021177375A (en)
KR (1) KR102540185B1 (en)
CN (1) CN113627135B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220156635A1 (en) * 2020-11-19 2022-05-19 Sap Se Machine Learning Prediction For Recruiting Posting
US20230008868A1 (en) * 2021-07-08 2023-01-12 Nippon Telegraph And Telephone Corporation User authentication device, user authentication method, and user authentication computer program
CN114492393A (en) * 2022-01-17 2022-05-13 北京百度网讯科技有限公司 Text theme determination method and device and electronic equipment
CN114997829A (en) * 2022-06-09 2022-09-02 北京联众鼎盛咨询有限公司 Recruitment service system and method
CN115062220B (en) * 2022-06-16 2023-06-23 成都集致生活科技有限公司 Attention merging-based recruitment recommendation system
CN115630651B (en) * 2022-10-24 2023-06-02 北京百度网讯科技有限公司 Text generation method and training method and device of text generation model
JP7329159B1 (en) * 2023-02-20 2023-08-17 株式会社ビズリーチ Information processing system, information processing method and program
CN117689354B (en) * 2024-02-04 2024-04-19 芯知科技(江苏)有限公司 Intelligent processing method and platform for recruitment information based on cloud service

Family Cites Families (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005346171A (en) * 2004-05-31 2005-12-15 Recruit Co Ltd Recruitment information provision system
US8843433B2 (en) * 2011-03-29 2014-09-23 Manyworlds, Inc. Integrated search and adaptive discovery system and method
KR102363369B1 (en) * 2014-01-31 2022-02-15 구글 엘엘씨 Generating vector representations of documents
KR101842362B1 (en) * 2016-09-01 2018-03-26 성균관대학교산학협력단 An apparatus for generating paragraph based on artificial neural network and method thereof
US11113732B2 (en) * 2016-09-26 2021-09-07 Microsoft Technology Licensing, Llc Controlling use of negative features in a matching operation
US10909442B1 (en) * 2017-03-30 2021-02-02 Amazon Technologies, Inc. Neural network-based artificial intelligence system for content-based recommendations using multi-perspective learned descriptors
US10733380B2 (en) * 2017-05-15 2020-08-04 Thomson Reuters Enterprise Center Gmbh Neural paraphrase generator
CA3144657C (en) * 2017-05-23 2023-10-10 Google Llc Attention-based sequence transduction neural networks
CN107273503B (en) * 2017-06-19 2020-07-10 北京百度网讯科技有限公司 Method and device for generating parallel text in same language
WO2018236761A1 (en) * 2017-06-19 2018-12-27 Vettd, Inc. Systems and methods to determine and utilize semantic relatedness between multiple natural language sources to determine strengths and weaknesses
CA3015240A1 (en) * 2017-08-25 2019-02-25 Royal Bank Of Canada Service management control platform
CN111919230A (en) * 2017-10-02 2020-11-10 刘伟 Machine learning system for job applicant resume ranking
CN107967592A (en) * 2017-10-12 2018-04-27 如是科技(大连)有限公司 The aid in treatment method and device of job notice
CN108280061B (en) * 2018-01-17 2021-10-26 北京百度网讯科技有限公司 Text processing method and device based on ambiguous entity words
US10437936B2 (en) * 2018-02-01 2019-10-08 Jungle Disk, L.L.C. Generative text using a personality model
US20190287012A1 (en) * 2018-03-16 2019-09-19 Microsoft Technology Licensing, Llc Encoder-decoder network with intercommunicating encoder agents
US11042712B2 (en) * 2018-06-05 2021-06-22 Koninklijke Philips N.V. Simplifying and/or paraphrasing complex textual content by jointly learning semantic alignment and simplicity
US20200193382A1 (en) * 2018-12-17 2020-06-18 Robert P. Michaels Employment resource system, method and apparatus
US20200210491A1 (en) * 2018-12-31 2020-07-02 Charles University Faculty of Mathematics and Physics Computer-Implemented Method of Domain-Specific Full-Text Document Search
CN109886641A (en) * 2019-01-24 2019-06-14 平安科技(深圳)有限公司 A kind of post portrait setting method, post portrait setting device and terminal device
CN109918483B (en) * 2019-03-15 2021-07-16 智者四海(北京)技术有限公司 Device and method for matching recruitment position and job hunting resume
CN110032681B (en) * 2019-04-17 2022-03-15 北京网聘咨询有限公司 Resume content-based job recommendation method
CN110782072A (en) * 2019-09-29 2020-02-11 广州荔支网络技术有限公司 Employee leave risk prediction method, device, equipment and readable storage medium
US11373146B1 (en) * 2021-06-30 2022-06-28 Skyhive Technologies Inc. Job description generation based on machine learning

Also Published As

Publication number Publication date
EP3859588A3 (en) 2021-10-20
JP2021177375A (en) 2021-11-11
KR102540185B1 (en) 2023-06-07
US20210216726A1 (en) 2021-07-15
CN113627135B (en) 2023-09-29
CN113627135A (en) 2021-11-09
KR20210047284A (en) 2021-04-29

Similar Documents

Publication Publication Date Title
EP3859588A2 (en) Method, apparatus, device and medium for generating recruitment position description text
EP3885935A1 (en) Image questioning and answering method, apparatus, device and storage medium
CN112270379A (en) Training method of classification model, sample classification method, device and equipment
EP3852000A1 (en) Method and apparatus for processing semantic description of text entity, device and storage medium
CN113220836B (en) Training method and device for sequence annotation model, electronic equipment and storage medium
CN111325020A (en) Event argument extraction method and device and electronic equipment
US11899699B2 (en) Keyword generating method, apparatus, device and storage medium
US20220358292A1 (en) Method and apparatus for recognizing entity, electronic device and storage medium
US11947578B2 (en) Method for retrieving multi-turn dialogue, storage medium, and electronic device
KR20210038430A (en) Expression learning method and device based on natural language and knowledge graph
CN113722493A (en) Data processing method, device, storage medium and program product for text classification
CN112506949B (en) Method, device and storage medium for generating structured query language query statement
KR20210122204A (en) Method and apparatus for predicting emotion style of dialogue, electronic device, storage medium, and computer program product
CN115688920A (en) Knowledge extraction method, model training method, device, equipment and medium
CN112507103A (en) Task type dialogue and model training method, device, equipment and storage medium
CN111611808A (en) Method and apparatus for generating natural language model
CN112270169B (en) Method and device for predicting dialogue roles, electronic equipment and storage medium
CN111311000B (en) User consumption behavior prediction model training method, device, equipment and storage medium
Sonawane et al. ChatBot for college website
EP4109443A2 (en) Method for correcting text, method for generating text correction model, device and medium
Zhu et al. SIM: A slot-independent neural model for dialogue state tracking
US20220108174A1 (en) Training neural networks using auxiliary task update decomposition
CN114491030A (en) Skill label extraction and candidate phrase classification model training method and device
Ali et al. Intelligent agents in educational institutions: NEdBOT-NLP-based chatbot for administrative support using DialogFlow
Ye et al. A natural language-based flight searching system

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20210329

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

PUAL Search report despatched

Free format text: ORIGINAL CODE: 0009013

AK Designated contracting states

Kind code of ref document: A3

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

RIC1 Information provided on ipc code assigned before grant

Ipc: G06N 3/08 20060101ALI20210914BHEP

Ipc: G06N 3/04 20060101ALI20210914BHEP

Ipc: G06F 40/56 20200101AFI20210914BHEP