CN114330333A - Method for processing skill information, model training method and device


Info

Publication number
CN114330333A
Authority
CN
China
Prior art keywords: information, skill, sample, examined, word
Legal status: Pending
Application number
CN202111644033.6A
Other languages
Chinese (zh)
Inventor
苏昱涵
秦川
申大忠
赵洪科
宋欣
祝恒书
Current Assignee: Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee: Beijing Baidu Netcom Science and Technology Co Ltd
Priority date: 2021-12-29
Filing date: 2021-12-29
Publication date: 2022-04-12
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202111644033.6A
Publication of CN114330333A

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The disclosure provides a method for processing skill information, a model training method, and corresponding apparatuses, and relates to the technical field of artificial intelligence, in particular to machine learning. The specific implementation scheme is as follows: acquiring information to be examined; determining at least one skill word based on the information to be examined and a pre-trained skill word generation model; and outputting the at least one skill word. This implementation enables skill information to be examined based on skill words, which improves the accuracy of the skill examination.

Description

Method for processing skill information, model training method and device
Technical Field
The disclosure relates to the technical field of artificial intelligence, in particular to the technical field of machine learning.
Background
Currently, for different positions, the skill information corresponding to each position needs to be examined in order to select candidates suited to each position.
However, the current examination of skill information relies on manual experience, which makes the skill examination subjective and inaccurate.
Disclosure of Invention
The disclosure provides a skill information processing method, a model training method and a device.
According to an aspect of the present disclosure, there is provided a method for processing skill information, including: acquiring information to be examined; determining at least one skill word based on the information to be examined and a skill word generation model trained in advance; and outputting at least one skill word.
According to another aspect of the present disclosure, there is provided a model training method, including: obtaining sample to-be-examined information and investigation result marking data; determining at least one sample skill word based on a preset skill word graph, sample to-be-examined information and a to-be-trained model; and training the model to be trained based on at least one sample skill word and the survey result marking data to obtain a trained skill word generation model.
According to another aspect of the present disclosure, there is provided an apparatus for processing skill information, including: an information acquisition unit configured to acquire information to be examined; the skill word determination unit is configured to determine at least one skill word based on the information to be examined and a pre-trained skill word generation model; a skill word output unit configured to output at least one skill word.
According to another aspect of the present disclosure, there is provided a model training apparatus including: the sample acquisition unit is configured to acquire sample to-be-inspected information and inspection result marking data; the sample skill word determining unit is configured to determine at least one sample skill word based on a preset skill word graph, sample to-be-examined information and a to-be-trained model; and the model training unit is configured to train the model to be trained on the basis of at least one sample skill word and the investigation result marking data to obtain a trained skill word generation model.
According to another aspect of the present disclosure, there is provided an electronic device including: one or more processors; and a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method for processing skill information or the model training method as described in any of the above.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform a method for processing skill information or a model training method as any one of the above.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements a method for processing skill information or a model training method as any of the above.
According to the technology of the present disclosure, a method for processing skill information is provided in which skill information can be examined based on skill words, thereby improving the accuracy of the skill examination.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present disclosure may be applied;
FIG. 2 is a flow diagram of one embodiment of a method for processing skill information according to the present disclosure;
FIG. 3 is a schematic diagram of one application scenario for a method for processing skill information according to the present disclosure;
FIG. 4 is a flow diagram of one embodiment of a model training method according to the present disclosure;
FIG. 5 is a flow diagram of another embodiment of a model training method according to the present disclosure;
FIG. 6 is a schematic block diagram of one embodiment of an apparatus for processing skill information according to the present disclosure;
FIG. 7 is a schematic block diagram of one embodiment of a model training apparatus according to the present disclosure;
FIG. 8 is a block diagram of an electronic device for implementing a method for processing skill information or a model training method of embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The terminal devices 101, 102, 103 interact with the server 105 via the network 104 to receive or send messages and the like. The terminal devices 101, 102, and 103 may obtain the information to be examined, such as the object information and position information that need to be examined, and send the information to be examined to the server 105 through the network 104, so that the server 105 returns at least one corresponding skill word and outputs the at least one skill word, enabling the user to perform a skill examination based on the at least one skill word. In the model training phase, the terminal devices 101, 102, and 103 may obtain the sample information to be examined and the investigation result labeling data, and send them to the server 105 through the network 104, so that the server 105 performs model training based on these data and returns a trained skill word generation model.
The terminal devices 101, 102, and 103 may be hardware or software. When they are hardware, they may be various electronic devices, including but not limited to mobile phones, computers, tablets, and the like. When they are software, they may be installed in the electronic devices listed above and may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module, which is not specifically limited herein.
The server 105 may be a server providing various services. For example, the server 105 may obtain the information to be examined sent by the terminal devices 101, 102, 103, determine at least one skill word corresponding to the information to be examined based on a pre-trained skill word generation model, and return the at least one skill word to the terminal devices 101, 102, 103 through the network 104. Alternatively, in the model training phase, the server 105 may receive the sample information to be examined and the investigation result labeling data sent by the terminal devices 101, 102, 103, and train the model to be trained based on a preset skill word graph, the sample information to be examined, and the investigation result labeling data, so as to obtain a trained skill word generation model.
The server 105 may be hardware or software. When the server 105 is hardware, it may be implemented as a distributed server cluster composed of a plurality of servers, or may be implemented as a single server. When the server 105 is software, it may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein.
It should be noted that the method for processing skill information or the model training method provided in the embodiment of the present disclosure may be executed by the terminal devices 101, 102, and 103, or may be executed by the server 105, and the apparatus for processing skill information or the model training apparatus may be provided in the terminal devices 101, 102, and 103, or may be provided in the server 105.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for processing skill information in accordance with the present disclosure is shown. The method for processing skill information of the embodiment comprises the following steps:
Step 201, obtaining the information to be examined.
In the present embodiment, an execution subject (such as the terminal devices 101, 102, 103 or the server 105 in fig. 1) may acquire the information to be examined related to a skill examination whenever a skill examination is required. The information to be examined may include, but is not limited to, position information to be examined, object information to be examined, historical investigation result information of the object to be examined, and the like, which is not limited in this embodiment. The execution subject may acquire the information to be examined from pre-stored local data, or from an electronic device with which a connection has been established in advance.
In some optional implementations of the embodiment, the information to be examined includes position information to be examined and object information to be examined. The position information to be examined describes the position under examination, such as position requirement information. The object information to be examined describes the application object that needs to be examined, such as resume information of the application object.
Step 202, determining at least one skill word based on the information to be examined and the pre-trained skill word generation model.
In this embodiment, the pre-trained skill word generation model is used to determine at least one skill word matching the information to be investigated, so that the recruitment object can perform skill investigation on the application object based on the at least one skill word.
The pre-trained skill word generation model can be obtained based on historical investigation data and a skill word graph corresponding to the historical investigation data, and the historical investigation data can include historical investigation information and investigation results corresponding to the historical investigation information.
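For illustration only, the following is a minimal sketch of how an execution subject might query a pre-trained skill word generation model with the information to be examined (step 202). The model interface rank_skill_words, the field names position_info and object_info, and the top-k selection are assumptions made for this sketch and are not part of the disclosed method.

```python
# Hypothetical inference sketch for step 202: query a pre-trained skill
# word generation model with the information to be examined.
from typing import Dict, List


def determine_skill_words(model, info_to_examine: Dict[str, str],
                          top_k: int = 3) -> List[str]:
    """Return the top-k skill words the model associates with the input."""
    # Concatenate position requirement text and object (e.g. resume) text
    # into one query, mirroring the two parts of the information to be examined.
    query = " ".join([
        info_to_examine.get("position_info", ""),
        info_to_examine.get("object_info", ""),
    ])
    # `rank_skill_words` is an assumed method that scores every skill word
    # known to the model against the query text.
    scored = model.rank_skill_words(query)  # e.g. [("python", 0.91), ...]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [word for word, _ in scored[:top_k]]
```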
Step 203, outputting the at least one skill word.
In this embodiment, the execution subject can output the at least one skill word, so that the recruitment object performs a recruitment investigation on the application object according to the at least one skill word. Optionally, the execution subject can output the at least one skill word to an electronic device bound to the recruitment object.
In some optional implementations of this embodiment, outputting at least one skill word includes: determining a target recruitment object based on the at least one skill word and the recruitment object information; and sending at least one skill word to the target recruitment object so that the target recruitment object performs skill investigation on the object to be investigated based on the at least one skill word.
In this implementation, the execution subject may acquire the recruitment object information after the at least one skill word has been determined. The recruitment object information may be the object information corresponding to each recruitment object; optionally, the object information may include object tags. The execution subject can match the at least one skill word against the object information of each recruitment object to obtain the matching degree between each recruitment object and the at least one skill word, and select the recruitment object with the highest matching degree as the target recruitment object. The at least one skill word is then sent to the electronic device bound to the target recruitment object, so that the target recruitment object performs a skill investigation on the object to be investigated based on the at least one skill word.
Alternatively, for each skill word, the execution subject can determine the target recruitment object with the highest matching degree for that skill word based on the skill word and the recruitment object information, and send the skill word to that target recruitment object, so that it investigates the skill corresponding to the skill word for the object to be investigated, as shown in the sketch below. That is, the execution subject can select different recruitment objects to investigate different skill words, which improves the investigation effect when multiple skill words are involved.
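As a concrete illustration of the matching described above, the sketch below scores each recruitment object against a skill word with a simple tag-overlap heuristic and routes the skill word to the best-matching recruiter. The data layout and the overlap measure are assumptions made for this sketch, not the prescribed matching method.

```python
# Sketch of per-skill-word recruiter matching: each skill word may be
# routed to the recruitment object whose tags match it best.
from typing import Dict, Set


def pick_target_recruiter(skill_word: str,
                          recruiters: Dict[str, Set[str]]) -> str:
    """Return the recruiter whose object tags best match the skill word."""
    def overlap(tags: Set[str]) -> int:
        # Assumed heuristic: count tags that contain the skill word or vice versa.
        return sum(1 for tag in tags if skill_word in tag or tag in skill_word)

    return max(recruiters, key=lambda name: overlap(recruiters[name]))


# Usage: different skill words can end up with different recruiters.
recruiters = {
    "recruiter_a": {"python", "machine learning", "data analysis"},
    "recruiter_b": {"java", "distributed systems"},
}
for skill in ["python", "distributed systems"]:
    print(skill, "->", pick_target_recruiter(skill, recruiters))
```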
With continued reference to fig. 3, a schematic diagram of one application scenario of the method for processing skill information according to the present disclosure is shown. In the application scenario of fig. 3, the execution subject may obtain the information to be examined 301, where the information to be examined 301 includes position information to be examined and object information to be examined. The execution subject inputs the information to be examined 301 into the skill word generation model 302 and obtains the skill words 303 output by the skill word generation model 302, where the skill words 303 may include, but are not limited to, skill word A, skill word B, and skill word C. Thereafter, the execution subject can send the skill words 303 to the target recruitment object 304. The target recruitment object 304 may be one object or a plurality of objects, which is not limited in this embodiment.
According to the method for processing skill information provided by the embodiment of the disclosure, the skill words corresponding to the information to be examined can be determined based on a pre-trained skill word generation model, and the skill information is then examined based on those skill words, which improves the accuracy of the skill examination.
With continued reference to FIG. 4, a flow 400 of one embodiment of a model training method according to the present disclosure is shown. As shown in fig. 4, the model training method of the present embodiment may include the following steps:
Step 401, obtaining the sample information to be examined and the investigation result labeling data.
In this embodiment, the sample information to be examined may be position information to be examined and object information to be examined in the historical investigation data, and the investigation result labeling data may be investigation result information, such as historical interview result information, in the historical investigation data, which matches the position information to be examined and the object information to be examined.
Step 402, determining at least one sample skill word based on a preset skill word graph, the sample information to be examined, and the model to be trained.
In this embodiment, the preset skill word graph includes a plurality of skill words and an association relationship between the skill words. The execution subject can determine at least one sample skill word output by the model to be trained based on the preset skill word graph and the sample to-be-examined information.
The execution subject can input the preset skill word graph and the sample to-be-examined information into the to-be-trained model, so that the to-be-trained model determines at least one sample skill word matched with the sample to-be-examined information from the preset skill word graph.
Step 403, training the model to be trained based on the at least one sample skill word and the investigation result labeling data to obtain a trained skill word generation model.
In this embodiment, the execution subject may compute the matching degree between the investigation result labeling data and the at least one sample skill word, adjust the model parameters of the model to be trained in response to determining that the matching degree is low, and repeat the iteration until the matching degree between the at least one sample skill word output by the model to be trained and the investigation result labeling data is high enough to satisfy a preset convergence condition, thereby obtaining the trained skill word generation model, as sketched below.
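The following minimal sketch illustrates the iterative adjustment described in this step. The overlap-based matching degree, the predict/update hooks on the model, and the convergence threshold are illustrative assumptions, not the exact objective of the disclosure.

```python
# Hypothetical training loop for step 403: keep adjusting the model until
# its sample skill words match the investigation result labeling data
# closely enough (a preset convergence condition).
def train_skill_word_model(model, skill_word_graph, samples,
                           target_match: float = 0.9, max_epochs: int = 100):
    """samples: list of (sample_info, labeled_skill_words) pairs."""
    for epoch in range(max_epochs):
        total_match = 0.0
        for sample_info, labeled_skills in samples:
            predicted = set(model.predict(skill_word_graph, sample_info))
            labeled = set(labeled_skills)
            # Matching degree: fraction of labeled skill words recovered.
            match = len(predicted & labeled) / max(len(labeled), 1)
            total_match += match
            if match < 1.0:
                # Assumed update hook; a real model would minimize a loss here.
                model.update(skill_word_graph, sample_info, labeled)
        if total_match / len(samples) >= target_match:
            break  # convergence condition met
    return model
```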
The trained skill word generation model can be applied to the method for processing skill information described above in connection with fig. 2, where it determines the skill words matching the information to be examined.
In the model training method provided by the above embodiment of the present disclosure, a preset skill word graph, together with the sample information to be examined and the investigation result labeling data from the historical investigation data, is used to train the model to be trained and obtain a trained skill word generation model, thereby enabling the generation of skill words that match the information to be examined.
With continued reference to FIG. 5, a flow 500 of another embodiment of a model training method according to the present disclosure is illustrated. As shown in fig. 5, the model training method of the present embodiment may include the following steps:
Step 501, obtaining the sample information to be examined and the investigation result labeling data.
In this embodiment, the sample information to be examined includes position sample information to be examined and object sample information to be examined. The position sample information to be examined is the position information of a historical investigation, such as historical position requirement information. The object sample information to be examined is the information of an application object in a historical investigation, such as the resume information of a historical application object. The investigation result labeling data can be the interview result information of the historical investigation, corresponding to the position information and object information of that historical investigation.
For a detailed description of step 501, refer to the detailed description of step 401, which is not repeated herein.
Step 502, determining a skill word graph based on the sample information to be examined and the investigation result labeling data.
In this embodiment, the execution subject may extract skill words from the sample information to be examined and the investigation result labeling data, and establish association relationships among the skill words based on their positions of occurrence in the sample information to be examined and the investigation result labeling data, so as to obtain the skill word graph.
In some optional implementations of this embodiment, determining the skill word graph based on the sample information to be examined includes: determining candidate skill words from the sample information to be examined and the investigation result labeling data; determining connection information among the candidate skill words based on the candidate skill words, the sample information to be examined, and the investigation result labeling data; and determining the skill word graph based on the candidate skill words and the connection information among them.
In this implementation, the execution subject may perform text analysis on the sample information to be examined and the investigation result labeling data, and extract candidate skill words from them. Optionally, the execution subject may extract the candidate skill words by combining a recurrent neural network, an attention mechanism, and a copy mechanism. The copy mechanism is used to copy words outside the vocabulary directly from the source text, and the attention mechanism concentrates computational attention on the key text segments that warrant it. Combining the recurrent neural network, the attention mechanism, and the copy mechanism improves the accuracy with which candidate skill words are determined, as illustrated by the sketch below.
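As a rough illustration of combining a recurrent neural network, an attention mechanism, and a copy mechanism, the sketch below shows a single scoring step in which a copy gate mixes a generation distribution over the vocabulary with an attention-based copy distribution over the source tokens. The architecture, dimensions, and single-step formulation are assumptions chosen for brevity, not the extractor specified by the disclosure.

```python
# Sketch of an encoder step with attention and a copy gate (PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F


class CopyAttnExtractor(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int = 64, hid: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid, batch_first=True)  # recurrent encoder
        self.attn = nn.Linear(hid * 2, 1)          # scores each encoder state
        self.gen = nn.Linear(hid * 2, vocab_size)  # generation distribution
        self.copy_gate = nn.Linear(hid * 2, 1)     # mixes copy vs. generate

    def forward(self, src_ids: torch.Tensor) -> torch.Tensor:
        # src_ids: (batch, seq_len) token ids of the sample text.
        enc_out, h = self.encoder(self.embed(src_ids))        # (B, T, H)
        state = h[-1]                                          # (B, H)
        # Attention over the encoder states.
        expanded = state.unsqueeze(1).expand_as(enc_out)       # (B, T, H)
        scores = self.attn(torch.cat([enc_out, expanded], dim=-1)).squeeze(-1)
        attn_w = F.softmax(scores, dim=-1)                     # (B, T)
        context = torch.bmm(attn_w.unsqueeze(1), enc_out).squeeze(1)
        feats = torch.cat([state, context], dim=-1)            # (B, 2H)
        p_gen = F.softmax(self.gen(feats), dim=-1)             # vocabulary dist.
        p_copy = torch.sigmoid(self.copy_gate(feats))          # (B, 1)
        # Copy distribution: scatter attention weights back onto the source
        # token ids, so words present in the source can be copied directly.
        copy_dist = torch.zeros_like(p_gen).scatter_add_(1, src_ids, attn_w)
        return (1 - p_copy) * p_gen + p_copy * copy_dist
```

Tokens whose combined probability exceeds a chosen threshold could then be kept as candidate skill words.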
The execution subject can then determine the connection information among the candidate skill words based on the candidate skill words, the sample information to be examined, and the investigation result labeling data. Optionally, the execution subject may determine the connection information based on the positions of the candidate skill words within each piece of information and the association relationships among the pieces of information. Here, a piece of information refers to the position information to be examined, the object information to be examined, or the investigation result labeling data of one item of historical data within the sample information to be examined and the investigation result labeling data.
In other optional implementations of this embodiment, determining the connection information among the candidate skill words based on the candidate skill words, the sample information to be examined, and the investigation result labeling data includes: determining at least one group of sample data tuples based on the sample information to be examined and the investigation result labeling data; for each group of sample data tuples, determining the connection information among the candidate skill words within that tuple; and determining the connection information among the candidate skill words based on the connection information within each group of sample data tuples.
In this implementation, the sample information to be examined may include the position information to be examined and the object information to be examined from a plurality of items of historical investigation data, and the investigation result labeling data may include the historical interview result information from those items. The execution subject can determine at least one group of sample data tuples from the sample information to be examined and the investigation result labeling data, where each group of sample data tuples includes position information to be examined, object information to be examined, and historical interview result information that correspond to one another.
For each group of sample data tuples, determining the connection information among the candidate skill words in the tuple may include: for each piece of information in the tuple, traversing the information with a preset sliding window and establishing connection information among the candidate skill words that fall within the window; and, for the pieces of information within the same sample data tuple, establishing connection relationships among candidate skill words across the different texts. The preset skill word graph is then generated from the intra-information connection relationships and the cross-information connection relationships of the candidate skill words in each sample data tuple. Here, the pieces of information include the position information to be examined, the object information to be examined, and the historical interview result information. A sketch of this graph construction follows.
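In the minimal sketch below, a sliding window yields intra-information edges and pairwise links across the pieces of information of one sample data tuple yield cross-information edges. The window size, the edge attribute names, and the use of the networkx library are illustrative choices, not requirements of the disclosure.

```python
# Sketch of building the skill word graph from sample data tuples.
from itertools import combinations
import networkx as nx


def build_skill_word_graph(sample_tuples, window: int = 3) -> nx.Graph:
    """sample_tuples: list of (position_info, object_info, interview_result),
    each element given as an ordered list of candidate skill words."""
    graph = nx.Graph()
    for data_tuple in sample_tuples:
        # Intra-information edges: connect candidate skill words that
        # co-occur within the preset sliding window of one text.
        for info in data_tuple:
            for i, word in enumerate(info):
                for other in info[i + 1:i + window]:
                    graph.add_edge(word, other, kind="intra")
        # Cross-information edges: connect candidate skill words that
        # appear in different texts of the same sample data tuple.
        for info_a, info_b in combinations(data_tuple, 2):
            for word_a in info_a:
                for word_b in info_b:
                    if word_a != word_b:
                        graph.add_edge(word_a, word_b, kind="cross")
    return graph
```

A fuller implementation might also accumulate co-occurrence counts as edge weights, which the model to be trained could exploit.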
Step 503, inputting the preset skill word graph and the sample information to be examined into the model to be trained.
In this embodiment, the execution subject may input the preset skill word graph and the sample information to be examined into the model to be trained, so that the model to be trained outputs at least one sample skill word.
Step 504, determining label representation information based on the preset skill word graph.
In this embodiment, the execution subject may obtain the label representation information by using the preset skill word graph. The label representation information is used to represent each skill word.
Step 505, determining topic representation information based on the preset skill word graph and the sample information to be examined.
In this embodiment, the execution subject may map the preset skill word graph into a preset multidimensional space to obtain mapping data corresponding to the skill word graph, and map the sample information to be examined into the same multidimensional space to obtain mapping data corresponding to the sample information to be examined. The execution subject may then integrate the two sets of mapping data to obtain the topic representation information. The topic representation information represents the text topic corresponding to the preset skill word graph and the sample information to be examined.
In some optional implementations of this embodiment, the following step may also be performed: the mapping data corresponding to the skill word graph and the mapping data corresponding to the sample information to be examined are processed by a multilayer perceptron to obtain integrated data, which is then output for subsequent data analysis and processing.
Step 506, determining at least one sample skill word based on the label representation information and the topic representation information.
In this embodiment, the execution subject may determine the at least one sample skill word by matching the label representation information with the topic representation information. A sample skill word may be one of the skill words in the label representation information. A combined sketch of steps 504 to 506 follows.
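In the sketch below, learnable node vectors of the skill word graph serve as the label representation, a multilayer perceptron fuses a graph-level mapping with a text-level mapping into the topic representation, and sample skill words are selected by similarity between the two. All layer sizes, the mean pooling of node vectors, and the cosine-similarity matching rule are assumptions made for this sketch.

```python
# Illustrative sketch of steps 504-506 (PyTorch): label representation,
# topic representation via a multilayer perceptron, and skill word selection.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SkillWordSelector(nn.Module):
    def __init__(self, num_skill_words: int, text_dim: int, dim: int = 64):
        super().__init__()
        # Label representation: one learnable vector per skill word node.
        self.label_repr = nn.Embedding(num_skill_words, dim)
        # Multilayer perceptron fusing the graph mapping and the text mapping.
        self.fuse = nn.Sequential(
            nn.Linear(dim + text_dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, text_vec: torch.Tensor, top_k: int = 3) -> torch.Tensor:
        # Graph-level mapping: here simply the mean of all node vectors.
        graph_vec = self.label_repr.weight.mean(dim=0)
        # Topic representation integrates the graph mapping and the mapping
        # of the sample information to be examined (text_vec).
        topic = self.fuse(torch.cat([graph_vec, text_vec], dim=-1))
        # Match the topic representation against every label representation.
        scores = F.cosine_similarity(
            topic.unsqueeze(0), self.label_repr.weight, dim=-1)
        # Indices of the selected sample skill words.
        return torch.topk(scores, k=top_k).indices
```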
Step 507, training the model to be trained based on the at least one sample skill word and the investigation result labeling data to obtain a trained skill word generation model.
In this embodiment, the skill word generation model may be a Bayesian latent variable model, or another existing machine learning model, which is not limited in this embodiment. For a detailed description of step 507, refer to the detailed description of step 403, which is not repeated here.
In the model training method provided by the above embodiment of the disclosure, skill words are extracted from the sample information to be examined and the investigation result labeling data, the sample data tuples corresponding to the extracted skill words are determined, and the association relationships among the skill words are determined from the associations inside each piece of information and across the pieces of information of each sample data tuple, so as to obtain the skill word graph; using the skill word graph to assist training improves the accuracy of the model. Determining the sample skill words by using the topic representation information together with the label representation information further improves the accuracy with which the sample skill words are determined.
With further reference to fig. 6, as an implementation of the methods shown in the above-mentioned figures, the present disclosure provides an embodiment of an apparatus for processing skill information, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 2, and the apparatus may be specifically applied to electronic devices such as a terminal device, a server, and the like.
As shown in fig. 6, the apparatus 600 for processing skill information of the present embodiment includes: an information acquisition unit 601, a skill word determination unit 602, and a skill word output unit 603.
An information acquisition unit 601 configured to acquire the information to be examined.
A skill word determination unit 602 configured to determine at least one skill word based on the information to be examined and the pre-trained skill word generation model.
A skill word output unit 603 configured to output at least one skill word.
In some optional implementations of the embodiment, the information to be examined includes information on a position to be examined and information on an object to be examined.
In some optional implementations of this embodiment, the skill word output unit 603 is further configured to: determining a target recruitment object based on the at least one skill word and the recruitment object information; and sending at least one skill word to the target recruitment object so that the target recruitment object performs skill investigation on the object to be investigated based on the at least one skill word.
It should be understood that units 601 to 603, which are described in the apparatus 600 for processing skill information, correspond to the respective steps in the method described with reference to fig. 2, respectively. Thus, the operations and features described above for the method for processing skill information are equally applicable to the apparatus 600 and the units contained therein and will not be described in detail here.
With further reference to fig. 7, as an implementation of the methods shown in the above-mentioned figures, the present disclosure provides an embodiment of a model training apparatus, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 4, and the apparatus may be specifically applied to electronic devices such as a terminal device and a server.
As shown in fig. 7, the model training apparatus 700 of the present embodiment includes: a sample acquisition unit 701, a sample skill word determination unit 702, and a model training unit 703.
The sample acquisition unit 701 is configured to obtain the sample information to be examined and the investigation result labeling data.
A sample skill word determination unit 702 configured to determine at least one sample skill word based on a preset skill word graph, sample to-be-examined information, and a to-be-trained model.
And the model training unit 703 is configured to train the model to be trained based on the at least one sample skill word and the survey result tagging data, so as to obtain a trained skill word generation model.
In some optional implementations of this embodiment, the method further includes: and the skill word graph determining unit is configured to determine a skill word graph based on the sample to-be-examined information and the investigation result marking data.
In some optional implementations of this embodiment, the skill word graph determining unit is further configured to: determining each candidate skill word from the information to be inspected and the inspection result marking data of the sample; determining connection information among the candidate skill words based on the candidate skill words, the sample information to be inspected and the inspection result marking data; and determining a skill word graph based on each candidate skill word and the connection information between the candidate skill words.
In some optional implementations of this embodiment, the skill word graph determining unit is further configured to: determining at least one group of sample data tuples based on the information to be inspected and the inspection result marking data of the sample; for each group of sample data tuples, determining connection information among candidate skill words in the sample data tuples; and determining the connection information between the candidate skill words based on the connection information between the candidate skill words in each group of sample data tuples.
In some optional implementation manners of this embodiment, the sample to-be-examined information includes sample information of the to-be-examined position and sample information of the to-be-examined object.
In some optional implementations of this embodiment, the sample skill word determination unit 702 is further configured to: inputting preset skill word diagrams and sample to-be-examined information into a model to be trained; determining label representation information based on a preset skill word graph; determining topic representation information based on a preset skill word graph and sample to-be-examined information; and determining at least one sample skill word based on the tag characterization information and the subject characterization information.
It should be understood that the units 701 to 703 recited in the model training apparatus 700 correspond to the respective steps in the method described with reference to fig. 4. Thus, the operations and features described above with respect to the model training method are equally applicable to the apparatus 700 and the units included therein, and are not described in detail here.
In the technical solution of the present disclosure, the acquisition, storage, and application of the personal information of the users involved all comply with the provisions of the relevant laws and regulations and do not violate public order and good customs.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 8 illustrates a schematic block diagram of an example electronic device 800 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 8, the device 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. The RAM 803 can also store various programs and data required for the operation of the device 800. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
A number of components in the device 800 are connected to the I/O interface 805, including: an input unit 806, such as a keyboard, a mouse, or the like; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, or the like; and a communication unit 809 such as a network card, modem, wireless communication transceiver, etc. The communication unit 809 allows the device 800 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 801 may be any of various general and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and the like. The computing unit 801 performs the respective methods and processes described above, such as the method for processing skill information or the model training method. For example, in some embodiments, the method for processing skill information or the model training method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program can be loaded and/or installed onto the device 800 via the ROM 802 and/or the communication unit 809. When the computer program is loaded into the RAM 803 and executed by the computing unit 801, one or more steps of the method for processing skill information or the model training method described above can be performed. Alternatively, in other embodiments, the computing unit 801 may be configured by any other suitable means (e.g., by means of firmware) to perform the method for processing skill information or the model training method.
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (21)

1. A method for processing skill information, comprising:
acquiring information to be examined;
determining at least one skill word based on the information to be examined and a skill word generation model trained in advance;
outputting the at least one skill word.
2. The method of claim 1, wherein the information to be investigated comprises position information to be investigated and object information to be investigated.
3. The method of claim 1, wherein the outputting the at least one skill word comprises:
determining a target recruitment object based on the at least one skill word and the recruitment object information;
and sending the at least one skill word to the target recruitment object so that the target recruitment object performs skill investigation on the object to be investigated based on the at least one skill word.
4. A model training method, comprising:
obtaining sample to-be-examined information and investigation result marking data;
determining at least one sample skill word based on a preset skill word graph, the sample to-be-examined information and a to-be-trained model;
and training the model to be trained based on the at least one sample skill word and the survey result labeling data to obtain a trained skill word generation model.
5. The method of claim 4, further comprising:
and determining the skill word graph based on the sample to-be-examined information and the investigation result labeling data.
6. The method of claim 5, wherein the determining the skill word graph based on the sample to-be-examined information comprises:
determining each candidate skill word from the sample to-be-examined information and the investigation result labeling data;
determining connection information among the candidate skill words based on the candidate skill words, the sample to-be-examined information and the investigation result labeling data;
determining the skill word graph based on the candidate skill words and connection information between the candidate skill words.
7. The method of claim 6, wherein the determining connection information between the candidate skill words based on the candidate skill words, the sample review information, and the survey result annotation data comprises:
determining at least one group of sample data tuples based on the sample information to be examined and the survey result marking data;
for each group of sample data tuples, determining connection information among candidate skill words in the sample data tuples;
and determining the connection information among the candidate skill words based on the connection information among the candidate skill words in each group of sample data tuples.
8. The method according to claim 4, wherein the sample information to be examined comprises position sample information to be examined and object sample information to be examined.
9. The method of claim 4, wherein the determining at least one sample skill word based on a preset skill word graph, the sample scout information, and a model to be trained comprises:
inputting the preset skill word graph and the sample to-be-examined information into the model to be trained;
determining label representation information based on the preset skill word graph;
determining topic representation information based on the preset skill word graph and the sample to-be-examined information;
determining the at least one sample skill word based on the tag characterization information and the topic characterization information.
10. An apparatus for processing skill information, comprising:
an information acquisition unit configured to acquire information to be examined;
a skill word determination unit configured to determine at least one skill word based on the information to be examined and a pre-trained skill word generation model;
a skill word output unit configured to output the at least one skill word.
11. The apparatus of claim 10, wherein the information to be investigated comprises position information to be investigated and object information to be investigated.
12. The apparatus of claim 10, wherein the skill word output unit is further configured to:
determining a target recruitment object based on the at least one skill word and the recruitment object information;
and sending the at least one skill word to the target recruitment object so that the target recruitment object performs skill investigation on the object to be investigated based on the at least one skill word.
13. A model training apparatus comprising:
the sample acquisition unit is configured to acquire sample to-be-inspected information and inspection result marking data;
the sample skill word determining unit is configured to determine at least one sample skill word based on a preset skill word graph, the sample to-be-examined information and a to-be-trained model;
and the model training unit is configured to train the model to be trained on the basis of the at least one sample skill word and the investigation result marking data to obtain a trained skill word generation model.
14. The apparatus of claim 13, further comprising:
and the skill word graph determining unit is configured to determine the skill word graph based on the sample to-be-examined information and the investigation result labeling data.
15. The apparatus of claim 14, wherein the skill word graph determination unit is further configured to:
determining each candidate skill word from the sample to-be-examined information and the investigation result labeling data;
determining connection information among the candidate skill words based on the candidate skill words, the sample to-be-examined information and the investigation result labeling data;
determining the skill word graph based on the candidate skill words and connection information between the candidate skill words.
16. The apparatus of claim 15, wherein the skill word graph determination unit is further configured to:
determining at least one group of sample data tuples based on the sample information to be examined and the survey result marking data;
for each group of sample data tuples, determining connection information among candidate skill words in the sample data tuples;
and determining the connection information among the candidate skill words based on the connection information among the candidate skill words in each group of sample data tuples.
17. The apparatus of claim 13, wherein the sample to-be-examined information includes position to-be-examined sample information and object to-be-examined sample information.
18. The apparatus of claim 13, wherein the sample skill word determination unit is further configured to:
inputting the preset skill word graph and the sample to-be-examined information into the model to be trained;
determining label representation information based on the preset skill word graph;
determining topic representation information based on the preset skill word graph and the sample to-be-examined information;
determining the at least one sample skill word based on the tag characterization information and the topic characterization information.
19. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-9.
20. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-9.
21. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-9.
CN202111644033.6A | priority date 2021-12-29 | filing date 2021-12-29 | Method for processing skill information, model training method and device | Pending | published as CN114330333A

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202111644033.6A | 2021-12-29 | 2021-12-29 | Method for processing skill information, model training method and device

Publications (1)

Publication Number | Publication Date
CN114330333A | 2022-04-12

Family

ID=81016107

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202111644033.6A (Pending; published as CN114330333A) | Method for processing skill information, model training method and device | 2021-12-29 | 2021-12-29

Country Status (1)

Country | Link
CN | CN114330333A


Legal Events

Code | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination