CN111695352A - Grading method and device based on semantic analysis, terminal equipment and storage medium - Google Patents

Grading method and device based on semantic analysis, terminal equipment and storage medium

Info

Publication number
CN111695352A
CN111695352A (application CN202010469517.0A)
Authority
CN
China
Prior art keywords
neural network
network model
matrix
text information
text
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010469517.0A
Other languages
Chinese (zh)
Inventor
邓悦
郑立颖
徐亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202010469517.0A priority Critical patent/CN111695352A/en
Publication of CN111695352A publication Critical patent/CN111695352A/en
Priority to PCT/CN2020/119299 priority patent/WO2021114840A1/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/279Recognition of textual entities
    • G06F40/289Phrasal analysis, e.g. finite state techniques or chunking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35Clustering; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0639Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/10Office automation; Time management
    • G06Q10/105Human resources
    • G06Q10/1053Employment or hiring

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Human Resources & Organizations (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Strategic Management (AREA)
  • Health & Medical Sciences (AREA)
  • Economics (AREA)
  • Software Systems (AREA)
  • Quality & Reliability (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Development Economics (AREA)
  • Educational Administration (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Mathematical Physics (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Game Theory and Decision Science (AREA)
  • Databases & Information Systems (AREA)
  • Machine Translation (AREA)

Abstract

The application belongs to the technical field of computers and provides a scoring method, a scoring apparatus, terminal equipment and a storage medium based on semantic analysis. The method comprises the following steps: acquiring voice information of a target user and converting the voice information into text information; inputting the text information into a trained first neural network model and performing semantic analysis on the text information to obtain the text classification result output by the first neural network model, wherein the text classification result comprises a scoring label corresponding to the text information; and calculating an interview scoring result of the target user according to the scoring label. The method and the device address the problems of high interview cost caused by slow language-model inference, low accuracy of interview dimension judgment, and low interview efficiency.

Description

Grading method and device based on semantic analysis, terminal equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a scoring method and apparatus based on semantic analysis, a terminal device, and a storage medium.
Background
As the scale of an enterprise grows, the number of recruits increases; for large-volume recruitment, ability scoring can be performed through intelligent interviews. In an intelligent interview ability-scoring scenario, each dimension ability point is scored according to the answers of the user.
However, existing language models have so many parameters that the memory of a terminal processor can hardly support them; as a result, training and inference of the language model are very slow and its accuracy is difficult to guarantee, which increases the interview cost, reduces the accuracy of judging each dimension of ability, and directly affects the efficiency of intelligent interviews.
Disclosure of Invention
The embodiments of the present application provide a scoring method and apparatus based on semantic analysis, a terminal device, and a storage medium, which can address the problems of slow language-model inference, increased interview cost, low accuracy of interview dimension judgment, and low interview efficiency.
In a first aspect, an embodiment of the present application provides a scoring method based on semantic analysis, including:
acquiring voice information of a target user, and converting the voice information into text information;
inputting the text information into a trained first neural network model, and performing semantic analysis on the text information to obtain an output text classification result of the first neural network model; the text classification result comprises a scoring label corresponding to the text information, the first neural network model is obtained by training based on a training sample set and a second neural network model, the second neural network model is obtained by training based on the training sample set and an output result of the first neural network model, the output result of the first neural network model is obtained by taking the training sample set as input, and the training sample set comprises a plurality of interview corpus texts;
and calculating an interview scoring result of the target user according to the scoring label.
In the scoring method based on semantic analysis, a trained language model is adopted: the conversation content of the target user during the interview is collected, semantic analysis is performed on it, the content is classified into text categories based on the analysis result, and the score of the conversation content in the corresponding text category is calculated. This achieves fast and accurate scoring of each dimension ability point of the target user according to his or her answers in an intelligent interview scenario, improving both interview efficiency and interview scoring accuracy.
In a possible implementation manner of the first aspect, acquiring voice information of a target user, and converting the voice information into text information includes:
recognizing the voice information through a voice recognition algorithm, and extracting acoustic features in the voice information;
and converting the voice information into text information according to the acoustic characteristics.
Illustratively, a correspondence between the text information and the current conversation topic is established, which provides a more accurate and reliable basis for the subsequent classification of the text information, so that the interviewee is scored more accurately according to the voice information during the intelligent interview.
In a possible implementation manner of the first aspect, before the inputting the text information into the trained first neural network model, the method includes:
dividing the text information according to a preset number of participles to obtain at least one short-sentence text conforming to the preset number of participles;
or setting a maximum short-sentence length for the process of converting the voice information into text information, dividing the voice information into at least one voice short sentence whose length is less than or equal to the maximum short-sentence length, and converting the at least one voice short sentence into text information.
In a possible implementation manner of the first aspect, before the inputting the text information into the trained first neural network model, the method includes:
acquiring a training sample set, wherein the training sample set comprises a plurality of interview corpus texts;
dividing the sentence text in the training sample set into a short sentence set with a preset word segmentation quantity, and coding the word segmentation in the short sentence set to obtain a word segmentation matrix;
performing convolution calculation on the word segmentation matrix to obtain a target matrix, and taking the dot product of the target matrix and a parameter matrix as an output matrix of a first neural network;
and acquiring the prediction vectors corresponding to the masked participles in the output matrix, and calculating the cross entropy loss between the prediction vectors and the real vectors actually corresponding to the masked words as a first loss.
In one possible implementation manner of the first aspect, before inputting the text information to the trained first neural network model, the method includes:
inputting the output matrix into a second neural network model, performing bidirectional convolution calculation on the output matrix by the second neural network model, and outputting the probability that each participle in the output matrix is masked;
and calculating the cross entropy losses corresponding to all the masked participles in the probability matrix as a second loss.
In one possible implementation manner of the first aspect, the method includes:
after training of the first neural network is completed for a preset number of iterations, iteratively training the second neural network model, for its own preset number of iterations, according to the output matrix of the first neural network and the training sample set, and adjusting the parameter matrix of the second neural network model.
In one possible implementation manner of the first aspect, the method includes:
and performing interactive training on the first neural network model and the second neural network model, and adjusting the parameter matrix to respectively obtain a first target parameter matrix of the first neural network model and a second target parameter matrix of the second neural network model.
In a second aspect, an embodiment of the present application provides a scoring apparatus based on semantic analysis, including:
the acquisition unit is used for acquiring voice information of a target user and converting the voice information into text information;
the processing unit is used for inputting the text information into the trained first neural network model, and performing semantic analysis on the text information to obtain an output text classification result of the first neural network model; the text classification result comprises a scoring label corresponding to the text information, the first neural network model is obtained by training based on a training sample set and a second neural network model, the second neural network model is obtained by training based on the training sample set and an output result of the first neural network model, the output result of the first neural network model is obtained by taking the training sample set as input, and the training sample set comprises a plurality of interview corpus texts;
and the scoring unit is used for calculating an interview scoring result of the target user according to the scoring label.
In a third aspect, an embodiment of the present application provides a terminal device, including: a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the scoring method based on semantic analysis according to any one of the first aspect when executing the computer program.
In a fourth aspect, the present application provides a computer-readable storage medium, where a computer program is stored, and when executed by a processor, the computer program implements the semantic analysis-based scoring method according to any one of the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer program product, which, when run on a terminal device, causes the terminal device to execute the scoring method based on semantic analysis according to any one of the above first aspects.
It is understood that the beneficial effects of the second aspect to the fifth aspect can be referred to the related description of the first aspect, and are not described herein again.
Compared with the prior art, the embodiments of the present application are beneficial in that: the voice information of the target user is acquired and converted into text information; the text information is input into a trained first neural network model and semantically analyzed to obtain the text classification result output by the first neural network model, which comprises the scoring label corresponding to the text information; and the interview scoring result of the target user is calculated according to the scoring label. This achieves fast and accurate scoring of each dimension ability point of the target user according to his or her answers in an intelligent interview scenario, improving interview efficiency and interview scoring accuracy, and has strong usability and practicability.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a schematic view of an application scenario provided in an embodiment of the present application;
FIG. 2 is a flow chart of a scoring method based on semantic analysis according to an embodiment of the present disclosure;
FIG. 3 is a schematic flow chart of language model training according to another embodiment of the present application;
FIG. 4 is a schematic structural diagram of a scoring apparatus based on semantic analysis according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to" determining "or" in response to detecting ". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
At present, in an intelligent interview conversation scene, particularly an application scene with a large recruitment amount, voice information in a conversation process of an interviewee is received through a microphone of a terminal device, answers of the interviewee are scored based on semantic analysis of the voice information, each dimensionality capability of the interviewee is evaluated, and interview efficiency is improved.
As shown in fig. 1, the interviewee is a user, the terminal device may present questions of a plurality of feature dimensions to the user in a text or voice form, receive answers from the user, score the answers from the user based on semantic analysis, and finally obtain capability scores of each feature dimension of the user.
The terminal device may be a mobile phone, a notebook computer, an ultra-mobile personal computer (UMPC), or another terminal device; it may also include, but is not limited to, a tablet computer, a wearable device, an in-vehicle device, an augmented reality (AR)/virtual reality (VR) device, a netbook, a personal digital assistant (PDA), and the like. The embodiment of the present application does not limit the specific type of the carrier of the client, that is, the terminal device.
Fig. 2 is a schematic view of an implementation flow of a scoring method based on semantic analysis provided in an embodiment of the present application, where the method includes:
step S201, acquiring voice information of a target user, and converting the voice information into text information.
In this embodiment, the target user may be the person to be interviewed, and the terminal device may play the role of the interviewer, posing a plurality of questions to the target user; the terminal device receives the voice information of the target user, realizing the conversation scenario of an intelligent interview.
In some embodiments, the obtaining voice information of the target user and converting the voice information into text information includes:
a1, recognizing the voice information through a voice recognition algorithm, and extracting acoustic features in the voice information;
and A2, converting the voice information into text information according to the acoustic characteristics.
In the embodiment of the application, in a conversation scene of an intelligent interview, terminal equipment can receive voice information in a conversation process of a target user through a microphone, recognize the voice information through a voice recognition algorithm, extract acoustic features of voice, acquire phoneme information of the voice information, and convert the voice information into text information by corresponding the phoneme information with words or phrases in a dictionary.
In some embodiments, before said inputting said text information to said trained first neural network model, comprises:
dividing the text information according to a preset number of participles to obtain at least one short-sentence text conforming to the preset number of participles;
or setting a maximum short-sentence length for the process of converting the voice information into text information, dividing the voice information into at least one voice short sentence whose length is less than or equal to the maximum short-sentence length, and converting the at least one voice short sentence into text information.
Specifically, the terminal device divides the text information according to the preset number of participles to obtain a plurality of short-sentence texts conforming to that number; or, during the conversion of the voice information into text information, a maximum short-sentence length is set, the voice information is divided into a plurality of voice short sentences no longer than that maximum, and the plurality of voice short sentences are converted into corresponding text information. In this way, when semantic recognition is subsequently performed on the text information, the size of the target parameter matrix used remains consistent throughout, which facilitates data processing on the terminal device.
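The splitting step above can be sketched as follows (a minimal illustration only: a whitespace tokenizer and an English sentence stand in for real Chinese word segmentation, and the chunk size is arbitrary):

```python
# Hypothetical sketch of the preprocessing step: cut a tokenized transcript
# into short-sentence chunks of at most a preset number of participles, so
# that downstream matrices keep a fixed size.

def split_into_phrases(tokens, max_tokens=14):
    """Split a token list into consecutive chunks of at most max_tokens tokens."""
    return [tokens[i:i + max_tokens] for i in range(0, len(tokens), max_tokens)]

tokens = "the weather was bad for days but today it is quite nice".split()
phrases = split_into_phrases(tokens, max_tokens=5)
# every chunk holds at most 5 tokens; no token is dropped
```

Each chunk can then be padded up to the preset participle count so that all phrase matrices share one shape.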
It should be noted that, in the application scenario of an actual conversation process, a correspondence between the text information and the current conversation topic is established, which provides a more accurate and reliable basis for the subsequent classification of the text information, so that the interviewee is scored more accurately according to the voice information during the intelligent interview.
Step S202, inputting the text information into a trained first neural network model, and performing semantic analysis on the text information to obtain an output text classification result of the first neural network model; and the text classification result comprises a scoring label corresponding to the text information.
In this embodiment, the first neural network model is a language model, performs semantic recognition on the text information, and classifies the text information according to the recognized semantics to obtain a score tag of a classification result corresponding to the text information.
Specifically, in the process of semantic recognition of the text information, the terminal device divides the sentence corresponding to the text information into a plurality of words or characters; converting the divided words or characters into vector matrix representation, and performing semantic understanding through a semantic recognition algorithm; and classifying the text information according to the semantics, and outputting a text classification result corresponding to the text information.
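As an illustration only (the label set and numeric mapping below are invented for the sketch and are not specified in the application), a per-phrase scoring label could be aggregated into a dimension score like this:

```python
# Hypothetical mapping from text-classification scoring labels to numeric
# scores; the real label set and weighting are not given in the text.
SCORE_LABELS = {"poor": 1, "average": 3, "good": 5}

def interview_score(phrase_labels):
    """Average the per-phrase scoring labels into one dimension score."""
    return sum(SCORE_LABELS[label] for label in phrase_labels) / len(phrase_labels)

score = interview_score(["good", "average", "good"])
```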
The first neural network model is obtained by training based on a training sample set and a second neural network model, the second neural network model is obtained by training based on the training sample set and the output result of the first neural network model, the output result of the first neural network model is obtained by taking the training sample set as input, and the training sample set comprises a plurality of interview corpus texts.
Referring to fig. 3, a flowchart of the training method of the language model provided in the embodiment of the present application is illustrated. Before the text information is input into the trained first neural network model, the training process of the model includes:
step S301, acquiring a training sample set, wherein the training sample set comprises a plurality of interview corpus texts;
specifically, the training sample set comprises multi-dimensional interview corpus texts, and the first neural network model is subjected to multi-dimensional training, so that multi-dimensional classification is performed on voice information input by a target user, and accordingly multi-dimensional capability of the target user is scored.
Step S302, dividing the sentence text in the training sample set into a short sentence set with a preset word segmentation quantity, and coding the words in the short sentence set to obtain a word segmentation matrix;
the terminal device divides the sentence text in the training sample set according to the preset number of the participles to obtain a short sentence set smaller than or equal to the preset number of the participles, for example, dividing the short sentence set into { "previous", "several days", "weather", "always", "not good", "don", "hard", "today", "weather", "not wrong", "not good", "very", "suitable", "step on", and adding 14 participles with punctuation marks, so that the preset number of the participles can be 14, and different participle number thresholds can be set according to the size of the model. And coding each participle to obtain a coded participle matrix, wherein each row of the matrix identifies a representation vector of each participle, for example, if the sentence text comprises 14 participles, the participle matrix comprises 14 rows. Specifically, taking the above sentence text as an example, a segmentation matrix M of 14 × 100 dimensions is obtained by encoding the segmentation words in the short sentence set, and Mi is denoted as the ith row of the segmentation matrix M.
Step S303, carrying out convolution calculation on the word segmentation matrix to obtain a target matrix, and taking the dot product of the target matrix and a parameter matrix as an output matrix of a first neural network;
specifically, before the process of performing convolution calculation on the segmentation matrix, one or more segmentation words in the phrase set are randomly masked, that is, one of the segmentation words is encoded as an unknown quantity, which is described by taking the segmentation matrix M as an example, and the 5 th word "not good" and the 9 th word "not good" are masked and then used as the input of the first neural network model. Performing convolution calculation on an input word segmentation matrix, taking the first row of the word segmentation matrix M as an example, performing vector dot product operation on M1 and M1-M14 respectively to obtain r 1-r 14, wherein r 1-r 14 are scalar numerical values; let r1 × M1+ r2 × M2+. n 4 × M14 be P1, P1 is a vector of 100 dimensions. And calculating each row of the word segmentation matrix M according to the operation process of the first row, updating M1-M14 into P1-P14, and combining the vectors P1-P14 into a matrix P with 14 x 100 dimensions. In order to enable the first neural network model to learn more semantics, performing convolution calculation on the matrix P again according to the operation on the matrix M to obtain a matrix S, and performing convolution calculation on the matrix S again according to the operation on the matrix M to obtain a matrix K, wherein the size of the matrix K is 14 × 100. Setting a parameter matrix according to the size of a dictionary of the first neural network model and the number of preset participles; for example, when the dictionary size of the first neural network model is 2000 for the matrix K obtained by the above convolution calculation, the size of the parameter matrix Q is set to 100 × 2000, the matrix T having a size of 14 × 2000 is obtained by changing K × Q to T, and the matrix T is used as the output matrix of the first neural network.
Step S304, obtaining a prediction vector corresponding to the masked word segmentation in the output matrix, and calculating a cross entropy loss between the prediction vector and a real vector actually corresponding to the masked word as a first loss.
Specifically, for example, the prediction vectors in the 5th and 9th rows of the matrix T and the real vectors corresponding to the masked words are used to calculate the cross entropy loss between the two, which is taken as the first loss, Loss1.
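Under the same placeholder dimensions, the first loss is a softmax cross entropy evaluated only at the masked rows (the true token ids below are arbitrary stand-ins):

```python
import numpy as np

def masked_cross_entropy(T, masked_rows, true_ids):
    """Mean cross entropy between predicted distributions and true token ids."""
    logits = T[masked_rows]                              # rows of masked participles
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return -np.log(probs[np.arange(len(true_ids)), true_ids]).mean()

rng = np.random.default_rng(0)
T = rng.normal(size=(14, 2000))                    # stand-in for the output matrix
loss1 = masked_cross_entropy(T, [4, 8], [17, 42])  # 5th and 9th participles (0-based rows)
```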
In some embodiments, before inputting the text information to the trained first neural network model, the method further comprises:
and B1, inputting the output matrix into a second neural network model, carrying out bidirectional convolution calculation on the output matrix by the second neural network model, and outputting the probability that each participle in the output matrix is covered.
Specifically, the second neural network model is a sequence labeling model. It takes the output matrix of the first neural network model as input and calculates, for the participle corresponding to each row vector of the output matrix, the probability of being masked and the probability of not being masked, thereby recognizing and labeling each participle in the output matrix and making the semantic analysis of the first neural network model more accurate.
Convolution calculation is performed in the bidirectional LSTM layer of the second neural network model, the results of the two directions are concatenated, and the concatenated result is input to the output layer of the second neural network model; the output layer performs a linear transformation on the vector corresponding to each participle output by the bidirectional LSTM layer. For example, taking the output matrix T as an example, after the bidirectional LSTM layer and the linear transformation of the output layer, the output for the first participle is a 100-dimensional vector Y1; a parameter matrix G of size 100 × 2 is set, and the output of the output layer for the first participle is obtained through Y1 × G = C1, where C1 is a 2-dimensional vector in which the first element represents the probability that the participle is masked and the second element represents the probability that it is not masked. Based on the same operation, the 2-dimensional vectors C1 to C14 corresponding to all the participles can be obtained, and the masking probability matrix C corresponding to all the participles is output.
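Keeping only the output-layer step, this can be sketched as follows; random values stand in for the concatenated bidirectional LSTM features, and a softmax is added as a simplifying assumption so that the two elements of each row read as probabilities:

```python
import numpy as np

rng = np.random.default_rng(2)
Y = rng.normal(size=(14, 100))   # stand-in per-participle features from the bidirectional LSTM
G = rng.normal(size=(100, 2))    # output-layer parameter matrix

logits = Y @ G                                         # row i is C_i before normalisation
exp = np.exp(logits - logits.max(axis=1, keepdims=True))
C = exp / exp.sum(axis=1, keepdims=True)               # column 0: P(masked), column 1: P(not masked)
```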
B2, calculating the cross entropy loss corresponding to all the masked participles in the probability matrix as a second loss.
Specifically, the second loss is Loss2 = sum{cross entropy loss(whether the i-th participle is masked, Ci)}, where i = 1, 2, …, 14.
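With one masking flag per participle, Loss2 is the sum of the per-row cross-entropy terms. In the sketch below the probability matrix and the flags are hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(3)
raw = rng.random(size=(14, 2))
C = raw / raw.sum(axis=1, keepdims=True)      # stand-in probability matrix: [P(masked), P(not masked)]
is_masked = [i in (4, 8) for i in range(14)]  # e.g. the 5th and 9th participles were masked

# Loss2 = sum_i cross_entropy(whether participle i is masked, C_i)
loss2 = -sum(np.log(C[i, 0] if flag else C[i, 1]) for i, flag in enumerate(is_masked))
```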
In one embodiment, the loss of the first neural network model is defined as Loss1 − Loss2. The better the recognition effect of the second neural network model, the more easily it finds out which participles in the output matrix of the first neural network model are masked, which indicates that the difference between the participles or semantics analyzed by the first neural network model and the actual semantics is larger.
In one embodiment, the first neural network model and the second neural network model are trained interactively. The parameter matrices of the two models are initialized randomly, that is, parameter matrices of preset sizes are defined and set to preset initial values. The two models are then trained in rounds according to the number of iterative training steps. In the first round, the first neural network model is iteratively trained and its parameter matrix is adjusted, while the second neural network model is not trained; the second neural network model is only used to calculate the probability that each participle in the output matrix of the first neural network model is masked, from which the second loss is calculated. The first neural network model is iteratively trained according to the second loss together with the first loss, and its parameter matrix is adjusted.
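The alternating schedule can be sketched with toy scalar "models"; everything below is a hypothetical stand-in for the real parameter matrices and losses, and only the update schedule mirrors the description (first model updates against Loss1 − Loss2 while the second is frozen, then the second model updates on its own loss):

```python
import random

random.seed(0)

def interactive_training(steps1, steps2, lr=0.01):
    g = random.random()   # stands in for the first model's randomly initialised parameters
    d = random.random()   # stands in for the second model's parameters

    # Round 1: only the first model updates, minimising loss1 - loss2;
    # the frozen second model just supplies loss2.
    for _ in range(steps1):
        loss1_grad = 2.0 * (g - 1.0)         # gradient of toy first loss (g - 1)^2
        loss2_grad = 2.0 * d * d * g         # gradient of toy second loss (d * g)^2 wrt g
        g -= lr * (loss1_grad - loss2_grad)

    # Round 2: only the second model updates, reducing its own detection loss.
    for _ in range(steps2):
        d -= lr * 2.0 * d * g * g            # gradient of (d * g)^2 wrt d
    return g, d

g, d = interactive_training(100, 100)
```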
In one embodiment, after the training of the first neural network model is completed for the preset number of iterations, the second neural network model is iteratively trained, for its own preset number of training iterations, on the output matrix of the first neural network model and the training sample set, and the parameter matrix of the second neural network model is adjusted.
And performing interactive training on the first neural network model and the second neural network model, and adjusting the parameter matrix to respectively obtain a first target parameter matrix of the first neural network model and a second target parameter matrix of the second neural network model.
The number of iterative training steps may be set according to the amount of local data; for example, for a total of L sentences of data with N sentences per training batch, the number of iterations is L/N, and N is generally set to 128.
In one embodiment, after the iterative training of the first neural network model, a scoring parameter matrix is set according to the scoring levels. For example, a scoring parameter matrix U with a size of 2000 × 5 is set for the output matrix T, and the output matrix T is multiplied by the scoring parameter matrix to obtain the predicted scoring label S corresponding to the input sentence text (T × U = S). The cross entropy loss between the predicted scoring label and the real scoring label is calculated, the first neural network model continues to be iteratively trained through this cross entropy loss, and the scoring parameter matrix is adjusted to obtain a target scoring parameter matrix. The first neural network model using the target scoring parameter matrix is used as the model for performing semantic recognition and text classification on the voice information input by the target user. The output matrix of the first neural network model is multiplied by the target scoring parameter matrix to obtain the probability of each score level among the scoring label levels, and the score level with the highest probability is taken as the scoring result of the session.
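A sketch of the scoring step follows; pooling the 14 per-participle rows into one sentence vector by averaging is a simplifying assumption of this sketch (the patent does not fix how the rows are reduced to a single label), and the matrices contain random stand-in values:

```python
import numpy as np

rng = np.random.default_rng(4)
T = rng.normal(size=(14, 2000))   # output matrix for one utterance (stand-in values)
U = rng.normal(size=(2000, 5))    # target scoring parameter matrix: 5 score levels

logits = T.mean(axis=0) @ U            # mean-pool the rows, then project to the 5 levels
exp = np.exp(logits - logits.max())
probs = exp / exp.sum()                # probability of each score level
score = int(np.argmax(probs)) + 1      # score levels 1..5, highest probability wins
```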
Specifically, the scoring label assigns a score level to the text classification result, so that the ability level of the target user can be determined from it; the scoring labels can be set, for example, to scores of 1, 2, 3, 4 and 5, and the scoring result of the current session scene is determined according to the scoring label corresponding to the text classification result.
According to the embodiment of the application, the first neural network model and the second neural network model are trained interactively. The second neural network model is used to judge whether the output of the first neural network model is real and reasonable, and its loss is added to the first neural network model as a reference index for the iterative training of the first neural network model. The closer the output of the first neural network model is to the real semantics, the harder it is for the second neural network model to accurately judge whether the semantics output by the first neural network model are wrong, which in turn drives the iterative training of the second neural network model. Through the iterative training of the second neural network model, the authenticity of the output of the first neural network model can be judged more accurately, so that the output of the first neural network model comes still closer to the real semantics. In addition, although the two models are trained simultaneously during training, only the trained first neural network model is used in actual application; therefore, when the semantic analysis unit is deployed in the terminal device, the number of parameters is greatly reduced, the inference speed of the model is greatly improved, the storage space occupied by the model is reduced, and the processing performance of the terminal device is improved.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 4 shows a structural block diagram of a scoring device based on semantic analysis provided in the embodiment of the present application, corresponding to the scoring method based on semantic analysis described in the above embodiment, and for convenience of explanation, only the parts related to the embodiment of the present application are shown.
Referring to fig. 4, the apparatus includes:
an obtaining unit 41, configured to obtain voice information of a target user, and convert the voice information into text information;
the processing unit 42 is configured to input the text information to the trained first neural network model, perform semantic analysis on the text information, and obtain an output text classification result of the first neural network model; the text classification result comprises a scoring label corresponding to the text information, the first neural network model is obtained by training based on a training sample set and a second neural network model, the second neural network model is obtained by training based on the training sample set and an output result of the first neural network model, the output result of the first neural network model is obtained by taking the training sample set as input, and the training sample set comprises a plurality of interview corpus texts;
and the scoring unit 43 is configured to calculate an interview scoring result of the target user according to the scoring tag.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Fig. 5 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 5, the terminal device 5 of this embodiment includes: at least one processor 50 (only one shown in fig. 5), a memory 51, and a computer program 52 stored in the memory 51 and executable on the at least one processor 50, wherein the processor 50 implements the steps of any of the above-mentioned respective semantic analysis based scoring method embodiments when executing the computer program 52.
The terminal device 5 may be a desktop computer, a notebook computer, a palmtop computer, a cloud server, or other computing device. The terminal device may include, but is not limited to, the processor 50 and the memory 51. Those skilled in the art will appreciate that fig. 5 is only an example of the terminal device 5 and does not constitute a limitation on the terminal device 5, which may include more or fewer components than those shown, a combination of some components, or different components, such as an input-output device, a network access device, and the like.
The processor 50 may be a Central Processing Unit (CPU); it may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 51 may in some embodiments be an internal storage unit of the terminal device 5, such as a hard disk or a memory of the terminal device 5. The memory 51 may also be an external storage device of the terminal device 5 in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal device 5. Further, the memory 51 may also include both an internal storage unit and an external storage device of the terminal device 5. The memory 51 is used for storing an operating system, an application program, a BootLoader (BootLoader), data, and other programs, such as program codes of the computer program. The memory 51 may also be used to temporarily store data that has been output or is to be output.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above-mentioned method embodiments.
The embodiments of the present application provide a computer program product, which when running on a mobile terminal, enables the mobile terminal to implement the steps in the above method embodiments when executed.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and which, when executed by a processor, implements the steps of the embodiments of the methods described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, etc. The computer-readable medium may include at least: any entity or device capable of carrying computer program code to a photographing apparatus/terminal apparatus, a recording medium, a computer memory, a Read-Only Memory (ROM), a Random-Access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB flash disk, a removable hard disk, a magnetic disk, or an optical disk. In certain jurisdictions, computer-readable media may not be an electrical carrier signal or a telecommunications signal, in accordance with legislative and patent practice.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the above-described apparatus/network device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A scoring method based on semantic analysis is characterized by comprising the following steps:
acquiring voice information of a target user, and converting the voice information into text information;
inputting the text information into a trained first neural network model, and performing semantic analysis on the text information to obtain an output text classification result of the first neural network model; the text classification result comprises a scoring label corresponding to the text information, the first neural network model is obtained by training based on a training sample set and a second neural network model, the second neural network model is obtained by training based on the training sample set and an output result of the first neural network model, the output result of the first neural network model is obtained by taking the training sample set as input, and the training sample set comprises a plurality of interview corpus texts;
and calculating an interview scoring result of the target user according to the scoring label.
2. The method of claim 1, wherein the obtaining voice information of the target user and converting the voice information into text information comprises:
recognizing the voice information through a voice recognition algorithm, and extracting acoustic features in the voice information;
and converting the voice information into text information according to the acoustic characteristics.
3. The method of claim 1, prior to said inputting said textual information to said trained first neural network model, comprising:
dividing the text information according to the number of preset participles to obtain at least one short sentence text which accords with the number of the preset participles;
or setting the number of the longest short sentences in the process of converting the voice information into the text information, dividing the voice information into at least one voice short sentence with the number less than or equal to the number of the longest short sentences, and converting the at least one voice short sentence into the text information.
4. The method of claim 1, prior to said inputting said textual information to said trained first neural network model, comprising:
acquiring a training sample set, wherein the training sample set comprises a plurality of interview corpus texts;
dividing the sentence text in the training sample set into a short sentence set with a preset word segmentation quantity, and coding the word segmentation in the short sentence set to obtain a word segmentation matrix;
performing convolution calculation on the word segmentation matrix to obtain a target matrix, and taking the dot product of the target matrix and a parameter matrix as an output matrix of a first neural network;
and acquiring a prediction vector corresponding to the covered word segmentation in the output matrix, and calculating the cross entropy loss of the prediction vector and a real vector actually corresponding to the covered word as a first loss.
5. The method of claim 4, prior to inputting the textual information to the trained first neural network model, comprising:
inputting the output matrix into a second neural network model, carrying out bidirectional convolution calculation on the output matrix by the second neural network model, and outputting the probability that each participle in the output matrix is covered;
and calculating cross entropy losses corresponding to all the masked participles in the probability matrix as second losses.
6. The method of claim 4, wherein the method comprises:
after the training of the first neural network is completed according to the preset iterative training times, the iterative training is carried out on the second neural network model according to the output matrix of the first neural network and the training sample set and the preset training times of the second neural network model, and the parameter matrix of the second neural network model is adjusted.
7. The method of claim 6, wherein the method comprises:
and performing interactive training on the first neural network model and the second neural network model, and adjusting the parameter matrix to respectively obtain a first target parameter matrix of the first neural network model and a second target parameter matrix of the second neural network model.
8. A scoring device based on semantic analysis, comprising:
the acquisition unit is used for acquiring voice information of a target user and converting the voice information into text information;
the processing unit is used for inputting the text information into the trained first neural network model, and performing semantic analysis on the text information to obtain an output text classification result of the first neural network model; the text classification result comprises a scoring label corresponding to the text information, the first neural network model is obtained by training based on a training sample set and a second neural network model, the second neural network model is obtained by training based on the training sample set and an output result of the first neural network model, the output result of the first neural network model is obtained by taking the training sample set as input, and the training sample set comprises a plurality of interview corpus texts;
and the scoring unit is used for calculating an interview scoring result of the target user according to the scoring label.
9. A terminal device, comprising: memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the method according to any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN202010469517.0A 2020-05-28 2020-05-28 Grading method and device based on semantic analysis, terminal equipment and storage medium Pending CN111695352A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010469517.0A CN111695352A (en) 2020-05-28 2020-05-28 Grading method and device based on semantic analysis, terminal equipment and storage medium
PCT/CN2020/119299 WO2021114840A1 (en) 2020-05-28 2020-09-30 Scoring method and apparatus based on semantic analysis, terminal device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010469517.0A CN111695352A (en) 2020-05-28 2020-05-28 Grading method and device based on semantic analysis, terminal equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111695352A true CN111695352A (en) 2020-09-22

Family

ID=72478509

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010469517.0A Pending CN111695352A (en) 2020-05-28 2020-05-28 Grading method and device based on semantic analysis, terminal equipment and storage medium

Country Status (2)

Country Link
CN (1) CN111695352A (en)
WO (1) WO2021114840A1 (en)

CN113343711A (en) * 2021-06-29 2021-09-03 南方电网数字电网研究院有限公司 Work order generation method, device, equipment and storage medium
CN113343711B (en) * 2021-06-29 2024-05-10 南方电网数字电网研究院有限公司 Work order generation method, device, equipment and storage medium
CN113420533A (en) * 2021-07-09 2021-09-21 中铁七局集团有限公司 Training method and device of information extraction model and electronic equipment
CN113420533B (en) * 2021-07-09 2023-12-29 中铁七局集团有限公司 Training method and device of information extraction model and electronic equipment
CN113792140A (en) * 2021-08-12 2021-12-14 南京星云数字技术有限公司 Text processing method and device and computer readable storage medium
CN113808709A (en) * 2021-08-31 2021-12-17 天津师范大学 Psychological resilience prediction method and system based on text analysis
CN113808709B (en) * 2021-08-31 2024-03-22 天津师范大学 Psychological resilience prediction method and system based on text analysis
CN113902404A (en) * 2021-09-29 2022-01-07 平安银行股份有限公司 Employee promotion analysis method, device, equipment and medium based on artificial intelligence

Also Published As

Publication number Publication date
WO2021114840A1 (en) 2021-06-17

Similar Documents

Publication Publication Date Title
CN111695352A (en) Grading method and device based on semantic analysis, terminal equipment and storage medium
CN111046133A (en) Question-answering method, question-answering equipment, storage medium and device based on atlas knowledge base
CN111858843B (en) Text classification method and device
CN113094478B (en) Expression reply method, device, equipment and storage medium
CN111291172A (en) Method and device for processing text
CN114298121A (en) Multi-mode-based text generation method, model training method and device
CN112417855A (en) Text intention recognition method and device and related equipment
CN111767714B (en) Text smoothness determination method, device, equipment and medium
CN112784066A (en) Information feedback method, device, terminal and storage medium based on knowledge graph
CN111339775A (en) Named entity identification method, device, terminal equipment and storage medium
CN114218945A (en) Entity identification method, device, server and storage medium
CN112632248A (en) Question answering method, device, computer equipment and storage medium
CN111695335A (en) Intelligent interviewing method and device and terminal equipment
CN110275953B (en) Personality classification method and apparatus
CN113221553A (en) Text processing method, device and equipment and readable storage medium
CN111368066B (en) Method, apparatus and computer readable storage medium for obtaining dialogue abstract
CN114281996A (en) Long text classification method, device, equipment and storage medium
CN112597299A (en) Text entity classification method and device, terminal equipment and storage medium
CN116844573A (en) Speech emotion recognition method, device, equipment and medium based on artificial intelligence
CN111859933A (en) Training method, recognition method, device and equipment of Malay recognition model
CN111401069A (en) Intention recognition method and intention recognition device for conversation text and terminal
CN110852066A (en) Multi-language entity relation extraction method and system based on confrontation training mechanism
CN114417891A (en) Reply sentence determination method and device based on rough semantics and electronic equipment
CN114722832A (en) Abstract extraction method, device, equipment and storage medium
CN110347813B (en) Corpus processing method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code: Ref country code: HK; Ref legal event code: DE; Ref document number: 40032339; Country of ref document: HK
SE01 Entry into force of request for substantive examination