CN113887930A - Question-answering robot health degree evaluation method, device, equipment and storage medium - Google Patents

Question-answering robot health degree evaluation method, device, equipment and storage medium

Info

Publication number
CN113887930A
CN113887930A (application CN202111150154.5A); granted publication CN113887930B
Authority
CN
China
Prior art keywords: question, text, answer, texts, calculating
Prior art date
Legal status: Granted
Application number
CN202111150154.5A
Other languages
Chinese (zh)
Other versions
CN113887930B
Inventor
高静
Current Assignee
Ping An Bank Co Ltd
Original Assignee
Ping An Bank Co Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Bank Co Ltd
Priority to CN202111150154.5A
Publication of CN113887930A
Application granted
Publication of CN113887930B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00: Administration; Management
    • G06Q 10/06: Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q 10/063: Operations research, analysis or management
    • G06Q 10/0639: Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q 10/06393: Score-carding, benchmarking or key performance indicator [KPI] analysis
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30: Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/35: Clustering; Classification
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00: Handling natural language data
    • G06F 40/20: Natural language analysis
    • G06F 40/232: Orthographic correction, e.g. spell checking or vowelisation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00: Handling natural language data
    • G06F 40/20: Natural language analysis
    • G06F 40/279: Recognition of textual entities
    • G06F 40/289: Phrasal analysis, e.g. finite state techniques or chunking

Abstract

The invention relates to artificial intelligence technology and discloses a question-answering robot health degree assessment method, which comprises the following steps: acquiring the question-and-answer text, the human-computer interaction text and the user scores of a preset question-answering robot, and performing a wrongly written character check and a semantic clarity check on the question-and-answer text to obtain the error rate and the semantic clarity rate of the question-and-answer text; counting repeated texts in the human-computer interaction text and calculating a matching value between each question and its corresponding answer in the human-computer interaction text to obtain a repetition rate and a question-answer matching value; processing the user scores according to a preset rating rule to obtain a poor rating rate; and performing a weighted calculation on the error rate, the semantic clarity rate, the repetition rate, the question-answer matching value and the poor rating rate to obtain the health degree score. In addition, the invention also relates to blockchain technology; for example, the question-and-answer text can be stored in blockchain nodes. The invention also provides a question-answering robot health degree evaluation device, electronic equipment and a storage medium. The method and the device can improve the accuracy of robot health degree evaluation.

Description

Question-answering robot health degree evaluation method, device, equipment and storage medium
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a question-answering robot health degree assessment method and device, electronic equipment and a computer readable storage medium.
Background
With the development of artificial intelligence, more and more enterprises use intelligent robots in place of people for repetitive work. For example, when a user consults on a business question, a question-answering robot is used instead of a customer service person to analyze and answer the user's question and to maintain communication with the user. However, as user demands change and the times develop rapidly, people's requirements on the service quality of question-answering robots keep rising, so it is important to evaluate the condition and capability of a question-answering robot regularly, so that it can be optimized and updated according to the evaluation result. At present, most question-answering robots are evaluated either by user ratings alone or by using a knowledge base to assess the robot's stored knowledge, and such evaluation criteria are not comprehensive.
Disclosure of Invention
The invention provides a question-answering robot health degree evaluation method and device and a computer readable storage medium, and mainly aims to solve the problem that the robot health degree evaluation is inaccurate.
In order to achieve the above object, the present invention provides a method for assessing the health degree of a question and answer robot, comprising:
acquiring a question and answer text of a preset question and answer robot, carrying out wrongly written character inspection on the question and answer text, and calculating the error rate of the question and answer text according to the number of wrongly written characters;
extracting text semantics of the question and answer text, performing semantic definition inspection on the text semantics, and calculating the semantic definition of the question and answer text according to the result of the semantic definition inspection;
acquiring a human-computer interaction text of the question-answering robot, extracting a machine text in the human-computer interaction text, counting repeated texts in the machine text, and calculating the repetition rate of the human-computer interaction text according to the number of the repeated texts;
selecting one of the human-computer interaction texts as a target text one by one from the human-computer interaction texts, calculating a matching value between the semantics of a question in the target text and the semantics of an answer corresponding to the question, and calculating a question-answer matching value of the interaction text according to the matching value;
obtaining user scores, and calculating the poor rating rate of the user scores according to a preset rating rule;
and calculating the error rate, the semantic clarity rate, the repetition rate, the question-answer matching value and the poor evaluation rate by using a preset weight algorithm to obtain the question-answer robot health degree score.
Optionally, the performing a wrong word check on the question and answer text, and calculating an error rate of the question and answer text according to the number of the wrong words includes:
performing word segmentation processing on the question and answer text to obtain text word segmentation;
detecting the text participles by using a pre-constructed wrongly written character proofreading model to obtain a wrongly written character set;
and counting the number of wrongly-written characters in the wrongly-written character set, and calculating according to the number of wrongly-written characters and the number of the text participles to obtain the error rate of the question and answer text.
Optionally, the extracting the text semantics of the question and answer text and performing semantic definition inspection on the text semantics includes:
performing vector conversion on the text participles to obtain word vectors of the text participles;
carrying out weighted calculation on the word vector according to preset word segmentation weight to obtain a text vector;
and carrying out semantic definition inspection on the text vector by using a preset semantic processing model to obtain a semantic definition value.
Optionally, the performing semantic clarity test on the text vector by using a preset semantic processing model to obtain a semantic clarity value includes:
performing convolution and pooling on the text vector by using a preset semantic processing model to obtain low-dimensional feature expression of the text vector;
mapping the low-dimensional feature expression to a pre-constructed high-dimensional space by using a preset mapping function to obtain a high-dimensional feature expression of the text vector;
and calculating a feature output value of each feature in the high-dimensional feature expression by using a preset first activation function, and calculating according to the feature output value to obtain a semantic definition value.
Optionally, the extracting a machine text in the human-computer interaction text and counting repeated texts in the machine text, and calculating a repetition rate of the human-computer interaction text according to the number of the repeated texts includes:
classifying the human-computer interaction text by using a clustering algorithm to obtain a user text and the machine text;
extracting repeated texts of the machine texts to obtain the number of the repeated texts;
and taking the ratio of the number of repeated texts to the number of the machine texts as the repetition rate of the human-computer interaction texts.
Optionally, the selecting one of the human-computer interaction texts one by one from the human-computer interaction texts as a target text, and calculating a matching value between semantics of a question in the target text and semantics of an answer corresponding to the question includes:
extracting interactive texts in the human-computer interactive texts one by one according to a sequential relation to serve as target texts, wherein the interactive texts comprise questions and answers corresponding to the questions;
extracting semantics of a question in the target text and semantics of an answer corresponding to the question;
and calculating a distance value of the semantics of the question and the semantics of the answer corresponding to the question, and calculating to obtain a matching value according to the distance value.
Optionally, the extracting semantics of the question in the target text and semantics of the answer corresponding to the question includes:
performing word segmentation processing on the question in the target text and the answer corresponding to the question respectively to obtain a first text word segmentation and a second text word segmentation;
performing vector conversion on the first text participle and the second text participle to obtain a first participle vector and a second participle vector;
respectively constructing vector subset sets of the first word segmentation vector and the second word segmentation vector, and respectively performing feature extraction on the vector subset sets of the first word segmentation vector and the second word segmentation vector by utilizing a pre-constructed semantic analysis model to obtain a first feature subset and a second feature subset;
and calculating a vector output value of each vector in the first feature subset and the second feature subset by using a preset second activation function, and respectively selecting the feature vectors of which the vector output values are greater than a preset threshold value as the semantics of the question in the target text and the answer corresponding to the question.
In order to solve the above problem, the present invention also provides a question-answering robot health degree evaluation device, including:
the error rate evaluation module is used for acquiring a question and answer text of a preset question and answer robot, carrying out wrongly written character detection on the question and answer text and calculating the error rate of the question and answer text according to the number of wrongly written characters;
the semantic definition rate evaluation module is used for extracting the text semantics of the question and answer text, carrying out semantic definition inspection on the text semantics and calculating the semantic definition rate of the question and answer text according to the result of the semantic definition inspection;
the repetition rate evaluation module is used for acquiring a human-computer interaction text of the question-answering robot, extracting a machine text in the human-computer interaction text, counting repeated texts in the machine text, and calculating the repetition rate of the human-computer interaction text according to the number of the repeated texts;
the question-answer matching value evaluation module is used for selecting one of the human-computer interaction texts as a target text one by one from the human-computer interaction texts, calculating a matching value between the semantics of a question in the target text and the semantics of an answer corresponding to the question, and calculating the question-answer matching value of the interaction text according to the matching value;
the poor rating evaluation module is used for obtaining user scores and calculating the poor rating of the user scores according to a preset rating rule;
and the health degree scoring module is used for calculating the error rate, the semantic clarity rate, the repetition rate, the question-answer matching value and the poor evaluation rate by using a preset weight algorithm to obtain the health degree score of the question-answer robot.
In order to solve the above problem, the present invention also provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor, and the computer program is executed by the at least one processor to enable the at least one processor to execute the question-answering robot health assessment method.
In order to solve the above problem, the present invention further provides a computer-readable storage medium, in which at least one computer program is stored, and the at least one computer program is executed by a processor in an electronic device to implement the method for assessing health of a question-answering robot described above.
According to the embodiment of the invention, the question-and-answer text, the human-computer interaction text and the user ratings of the question-answering robot are evaluated from multiple aspects to obtain the error rate, semantic clarity rate, repetition rate, question-answer matching value and poor evaluation rate as evaluation indexes, so that the original corpus and the human-computer interaction state of the question-answering robot are evaluated comprehensively, the evaluation indexes are more diversified, and the evaluation result is more accurate. Therefore, the question-answering robot health degree evaluation method, device, electronic equipment and computer readable storage medium can solve the problem of inaccurate robot health degree evaluation.
Drawings
Fig. 1 is a schematic flow chart of a method for assessing the health of a question-answering robot according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating an error rate calculation according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of calculating semantic clarity according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart illustrating a process for calculating a repetition rate according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating a process of calculating a matching value according to an embodiment of the present invention;
fig. 6 is a functional block diagram of a health degree evaluation apparatus of a question-answering robot according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of an electronic device for implementing the method for assessing the health degree of the question-answering robot according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The embodiment of the application provides a question-answering robot health degree evaluation method. The execution subject of the method includes, but is not limited to, at least one of the electronic devices, such as a server or a terminal, that can be configured to execute the method provided by the embodiment of the application. In other words, the method may be executed by software or hardware installed in a terminal device or a server device, and the software may be a blockchain platform. The server includes but is not limited to a single server, a server cluster, a cloud server, a cloud server cluster, and the like. The server may be an independent server, or may be a cloud server that provides basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a Content Delivery Network (CDN), and big data and artificial intelligence platforms.
Fig. 1 is a schematic flow chart of a method for assessing the health of a question-answering robot according to an embodiment of the present invention. In this embodiment, the method for assessing the health degree of the question-answering robot includes:
s1, obtaining a question and answer text of a preset question and answer robot, carrying out wrongly written character detection on the question and answer text, and calculating the error rate of the question and answer text according to the number of wrongly written characters;
in the embodiment of the present invention, the question and answer text includes the stored question-and-answer scripts of the question-answering robot, for example, the voice text content the AI robot needs to recognize when interacting with a user, the questions the AI robot asks the user, the stored answer scripts, and so on.
In the embodiment of the invention, computer statements with data-fetching functions, such as Java statements or Python statements, can be used to fetch the pre-stored question and answer text from a pre-constructed storage area, where the storage area includes but is not limited to a database, a blockchain node and a network cache.
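As an illustration of this retrieval step, the following is a minimal Python sketch that fetches pre-stored question and answer texts from a database; the database file, table name and column name are hypothetical placeholders rather than names taken from the patent.

```python
# Minimal sketch: fetch pre-stored question-and-answer texts from a database.
# The database file "qa_robot.db", table "qa_text" and column "content" are hypothetical.
import sqlite3

def fetch_qa_texts(db_path: str) -> list[str]:
    """Fetch all stored question-and-answer texts of the robot."""
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute("SELECT content FROM qa_text").fetchall()
        return [row[0] for row in rows]
    finally:
        conn.close()

qa_texts = fetch_qa_texts("qa_robot.db")
```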
Further, referring to fig. 2, the performing a wrong word check on the question and answer text and calculating the error rate of the question and answer text according to the number of the wrong words includes:
s11, performing word segmentation processing on the question and answer text to obtain text word segmentation;
s12, detecting the text participles by using a pre-constructed wrongly written character proofreading model to obtain a wrongly written character set;
s13, counting the number of wrongly-written characters in the wrongly-written character set, and calculating according to the number of wrongly-written characters and the number of text participles to obtain the error rate of the question and answer text.
In the embodiment of the invention, the wrongly written character proofreading model can be obtained by training a sequence labeling algorithm, such as an HMM (Hidden Markov Model). For example, confusable-word texts generated from the text participles, together with the text participles themselves, are used as the training data set; the confusable-word texts can be commonly miswritten characters, homophones, or spelling errors that imitate human typing, collected from the web, and the like.
For example, suppose a question and answer text is segmented into six text participles, and the wrongly written character proofreading model flags one of them as containing the wrongly written word "there", which is a confusable word for the intended "where". The number of wrongly written participles is 1, the number of text participles is 6, and the error rate is therefore 1/6 ≈ 16.67%.
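The calculation in this example can be sketched as follows; the jieba segmenter and the confusion-set lookup are illustrative stand-ins for the word segmentation step and the pre-built proofreading model.

```python
# Sketch of the error-rate calculation. A confusion-set lookup stands in for the
# pre-constructed HMM proofreading model described in the embodiment.
import jieba

def is_typo(word: str, confusion_set: set[str]) -> bool:
    # Placeholder for the wrongly-written-character proofreading model:
    # a participle counts as a typo if it appears in a known confusion set.
    return word in confusion_set

def error_rate(qa_text: str, confusion_set: set[str]) -> float:
    participles = jieba.lcut(qa_text)          # word segmentation -> text participles
    typo_count = sum(is_typo(w, confusion_set) for w in participles)
    return typo_count / len(participles) if participles else 0.0

# With 6 participles and 1 wrongly written word, the error rate is 1/6, about 16.67%.
```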
In the embodiment of the invention, the error rate index reflects the quality of the question-answering robot's script library, helps predict problems that may occur when the question-answering robot interacts with users, and provides a basic index for the final evaluation of the robot.
S2, extracting text semantics of the question and answer text, performing semantic definition inspection on the text semantics, and calculating the semantic definition of the question and answer text according to the result of the semantic definition inspection;
in the embodiment of the invention, the text semantics can be feature word vectors obtained after semantic analysis, and the clarity check refers to judging whether the analyzed text semantics can clearly express the intention of the question and answer text.
In the embodiment of the present invention, referring to fig. 3, the extracting text semantics of the question and answer text and performing semantic definition inspection on the text semantics includes:
s21, performing word segmentation on the question and answer text to obtain text word segmentation;
s22, carrying out vector conversion on the text participles to obtain word vectors of the text participles;
s23, carrying out weighted calculation on the word vectors according to preset word segmentation weights to obtain text vectors;
and S24, carrying out semantic definition inspection on the text vector by using a preset semantic processing model to obtain a semantic definition value.
In the embodiment of the invention, the word vectors can be obtained by training the text participles with a Word2Vec model, and the weight model can be a TF-IDF model, which estimates the importance of each text participle so that different participles receive different weights.
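A minimal sketch of this weighting step, assuming gensim's Word2Vec and an IDF-based stand-in for the preset participle weights (the corpus and parameters are illustrative), is:

```python
# Sketch: build a text vector as the IDF-weighted average of Word2Vec word vectors.
# IDF weights stand in for the "preset word segmentation weights" of the embodiment.
import numpy as np
from gensim.models import Word2Vec
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [["查询", "账户", "余额"], ["如何", "修改", "密码"]]   # pre-segmented texts

w2v = Word2Vec(corpus, vector_size=64, min_count=1, epochs=50)
tfidf = TfidfVectorizer(analyzer=lambda doc: doc)              # texts are already tokenised
tfidf.fit(corpus)
idf = dict(zip(tfidf.get_feature_names_out(), tfidf.idf_))

def text_vector(participles: list[str]) -> np.ndarray:
    vecs = [w2v.wv[w] * idf.get(w, 1.0) for w in participles if w in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

vec = text_vector(["查询", "余额"])
```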
Further, the semantic clarity inspection is performed on the text vector by using a preset semantic processing model to obtain a semantic clarity value, including:
performing convolution and pooling on the text vector by using a preset semantic processing model to obtain low-dimensional feature expression of the text vector;
mapping the low-dimensional feature expression to a pre-constructed high-dimensional space by using a preset mapping function to obtain a high-dimensional feature expression of the text vector;
and calculating a feature output value of each feature in the high-dimensional feature expression by using a preset first activation function, and calculating according to the feature output value to obtain a semantic definition value.
In detail, the semantic Processing model includes, but is not limited to, a Natural Language Processing (NLP) model, a Latent Dirichlet Allocation (LDA) model, and the like.
Because a single sentence in the question and answer text contains relatively little content, analyzing the text participles with the semantic processing model can improve the precision of the semantic clarity check for each sentence in the question and answer text.
Specifically, the semantic processing model can perform convolution, pooling and other processing on the text vector to reduce its data dimension and extract its data features. However, the extracted low-dimensional feature expression of the text vector may contain erroneous features that do not actually belong to the text vector but were extracted by mistake. Therefore, the low-dimensional features of the text vector can be mapped to a high-dimensional space by a preset mapping function to obtain the high-dimensional feature expression of the text vector, which further improves the accuracy of screening the extracted text features; the mapping function includes but is not limited to a Gaussian function and a remapping function.
For example, there is a low-dimensional feature expression expressed in two-dimensional coordinates (x, y), which can be mapped into a pre-constructed three-dimensional space by a preset function, resulting in a high-dimensional feature expression expressed in (x, y, z).
In the embodiment of the invention, the feature output value of each feature in the high-dimensional feature expression can be calculated by using a preset activation function, and the first activation function includes, but is not limited to, a sigmoid activation function, a relu activation function, and a softmax activation function.
For example, the high-dimensional feature expression includes a feature a, a feature B, and a feature C, and after the three features are calculated by using the activation function, the feature output value of the feature a is 80, the feature output value of the feature B is 70, and the feature output value of the feature C is 60, and the three features are averaged to obtain 70, which is the semantic clarity test result.
In the embodiment of the invention, the semantic clarity check results obtained for the individual sentences in the question and answer text are combined according to a preset calculation rule; for example, the semantic clarity rate of the question and answer text can be obtained by averaging the per-sentence results as a percentage.
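The clarity-scoring pipeline described above can be sketched as follows; the layer sizes are assumptions, and a linear layer stands in for the high-dimensional mapping function (which the embodiment describes as, for example, a Gaussian function).

```python
# Minimal sketch of the clarity-scoring pipeline: convolution and pooling produce a
# low-dimensional feature expression, a mapping lifts it to a higher dimension, and a
# first activation function turns each feature into an output value that is averaged.
# All architecture sizes are illustrative assumptions, not values from the patent.
import torch
import torch.nn as nn

class ClarityScorer(nn.Module):
    def __init__(self, embed_dim: int = 64, high_dim: int = 128):
        super().__init__()
        self.conv = nn.Conv1d(embed_dim, 32, kernel_size=3, padding=1)  # convolution
        self.pool = nn.AdaptiveMaxPool1d(1)                             # pooling
        self.mapping = nn.Linear(32, high_dim)   # low-dim -> high-dim feature expression
        self.act = nn.Sigmoid()                  # first activation function

    def forward(self, text_vectors: torch.Tensor) -> torch.Tensor:
        # text_vectors: (batch, seq_len, embed_dim)
        x = self.conv(text_vectors.transpose(1, 2))
        x = self.pool(x).squeeze(-1)             # low-dimensional feature expression
        x = self.mapping(x)                      # high-dimensional feature expression
        feature_outputs = self.act(x)            # per-feature output values
        return feature_outputs.mean(dim=1)       # averaged into a clarity value in [0, 1]

scorer = ClarityScorer()
clarity = scorer(torch.randn(1, 10, 64))         # one sentence of 10 word vectors
```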
In an optional embodiment of the present invention, the question and answer texts may be classified by a clustering algorithm (e.g., K-Means clustering) to obtain question texts and answer texts, where the question texts include standard question texts and the extended question texts corresponding to the standard question texts. The number of extended question texts that does not exceed a preset threshold (for example, 20) is counted; the ratio of this number to the number of standard question texts, the similarity between the extended question texts and the standard question texts, and the number of standard question texts can also be used as judgment indexes of the question-answering robot's health degree, so that the health degree judgment covers more aspects and dimensions.
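A sketch of the clustering step in this optional embodiment, using scikit-learn's K-Means over character TF-IDF features, is shown below; both choices are illustrative, and deciding which cluster holds the questions would still require a heuristic.

```python
# Sketch: K-Means clustering of question-and-answer texts into two groups
# (question texts vs. answer texts). Features and cluster count are assumptions.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

texts = ["如何开户?", "请携带身份证到网点办理。", "怎么修改密码?", "可在APP的设置页修改密码。"]

features = TfidfVectorizer(analyzer="char", ngram_range=(1, 2)).fit_transform(texts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

# Which label corresponds to questions is arbitrary here; a real system would decide
# this with a heuristic (e.g., the cluster whose texts end with question marks).
question_texts = [t for t, lab in zip(texts, labels) if lab == 0]
answer_texts   = [t for t, lab in zip(texts, labels) if lab == 1]
```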
S3, acquiring a human-computer interaction text of the question-answering robot, extracting a machine text in the human-computer interaction text, counting repeated texts in the machine text, and calculating the repetition rate of the human-computer interaction text according to the number of the repeated texts;
in the embodiment of the invention, the human-computer interaction text refers to the text data recorded from interactions with users during the working process of the AI robot. The machine text is the content output by the AI robot, i.e. the machine side of the human-computer interaction, during the interaction.
In the embodiment of the invention, the method for acquiring the man-machine interaction text of the question-answering robot can be the same as the method for acquiring the question-answering text of the question-answering robot.
In the embodiment of the present invention, referring to fig. 4, the extracting a machine text from the human-computer interaction text and counting repeated texts in the machine text, and calculating a repetition rate of the human-computer interaction text according to the number of the repeated texts includes:
s31, classifying the human-computer interaction text by using a clustering algorithm to obtain a user text and the machine text;
s32, extracting repeated texts of the machine texts to obtain the number of the repeated texts;
and S33, taking the ratio of the number of the repeated texts to the number of the machine texts as the repetition rate of the man-machine interaction texts.
In the embodiment of the present invention, the clustering algorithm includes, but is not limited to, K-Means clustering, Expectation-Maximization (EM) clustering based on a Gaussian Mixture Model (GMM), and mean-shift clustering.
For example, if the number of machine texts is 100 and the machine texts contain repeated texts A, B and C, where A appears 10 times, B appears 5 times and C appears 8 times, then the number of repeated texts is 9 + 4 + 7 = 20 (the occurrences beyond the first of each), and the repetition rate is 20/100 = 0.2.
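A sketch of this counting rule, matching the arithmetic of the example, is:

```python
# Sketch of the repetition-rate calculation: with 100 machine texts where A appears
# 10 times, B 5 times and C 8 times, the repeated texts number (10-1)+(5-1)+(8-1)=20
# and the repetition rate is 20/100 = 0.2.
from collections import Counter

def repetition_rate(machine_texts: list[str]) -> float:
    counts = Counter(machine_texts)
    repeated = sum(c - 1 for c in counts.values() if c > 1)  # occurrences beyond the first
    return repeated / len(machine_texts) if machine_texts else 0.0
```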
In the embodiment of the invention, by collecting the repeated texts in the machine text, it can be judged whether, when replying to users, the question-answering robot repeats questions or answers because it cannot find a matching answer, cannot acquire or parse the user's voice text, or runs into other problems.
S4, selecting one of the human-computer interaction texts from the human-computer interaction texts one by one as a target text, calculating a matching value between the semantics of a question in the target text and the semantics of an answer corresponding to the question, and calculating a question-answer matching value of the interaction text according to the matching value;
in the embodiment of the invention, the interactive text refers to the question provided by the user in the interactive process with the question-answering robot, and the corresponding answer is the answer provided by the question-answering robot according to the question provided by the user.
In the embodiment of the present invention, please refer to fig. 5, the selecting one of the human-computer interaction texts one by one from the human-computer interaction texts as a target text, and calculating a matching value between semantics of a question in the target text and semantics of an answer corresponding to the question includes:
s41, extracting interactive texts from the human-computer interactive texts one by one according to the sequence relation to serve as target texts, wherein the interactive texts comprise questions and answers corresponding to the questions;
s42, extracting semantics of a question in the target text and semantics of an answer corresponding to the question;
s43, calculating the distance value of the semantics of the question and the semantics of the answer corresponding to the question, and calculating to obtain a matching value according to the distance value.
Further, the extracting semantics of the question in the target text and semantics of the answer corresponding to the question includes:
performing word segmentation processing on the question in the target text and the answer corresponding to the question respectively to obtain a first text word segmentation and a second text word segmentation;
performing vector conversion on the first text participle and the second text participle to obtain a first participle vector and a second participle vector;
respectively constructing vector subset sets of the first word segmentation vector and the second word segmentation vector, and respectively performing feature extraction on the vector subset sets of the first word segmentation vector and the second word segmentation vector by utilizing a pre-constructed semantic analysis model to obtain a first feature subset and a second feature subset;
and calculating a vector output value of each vector in the first feature subset and the second feature subset by using a preset second activation function, and respectively selecting the feature vectors of which the vector output values are greater than a preset threshold value as the semantics of the question in the target text and the answer corresponding to the question.
Specifically, a preset word vector conversion model may be used to perform vector conversion on the first text participle and the second text participle to obtain a participle vector, where the word vector conversion model includes, but is not limited to, a word2vec model and a CRF (Conditional Random Field) model.
In the embodiment of the invention, the vector subset set comprises all subsets of the participle vectors, and the vector subset set of the participle vectors is constructed, so that the diversity of analysis vector combinations is favorably improved, and the accuracy of the generated key semantics is further improved. For example, the word segmentation vector includes vector a, vector B, and vector C, and the vector subset set of the word segmentation vector includes: the vector comprises six subsets of [ vector A ], [ vector B ], [ vector C ], [ vector A, vector B ], [ vector A, vector C ], [ vector B, vector C ].
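A sketch of generating such a vector subset set is shown below; it excludes the empty set and the full set to match the six subsets in the example, and whether the actual embodiment also includes them is not specified.

```python
# Sketch: build the vector subset set of a list of participle vectors.
# For three vectors A, B, C this yields the six subsets listed in the example.
from itertools import combinations

def vector_subsets(vectors: list) -> list[tuple]:
    subsets = []
    for size in range(1, len(vectors)):          # sizes 1 .. n-1, as in the example
        subsets.extend(combinations(vectors, size))
    return subsets

print(len(vector_subsets(["A", "B", "C"])))      # 6
```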
Further, the embodiment of the present invention may analyze the relevance between the analysis vectors in each vector subset of the vector subset set by using a pre-constructed semantic analysis model, so as to filter a representative feature subset from the vector subset set according to the relevance.
For example, the vector subset set with the first word segmentation vector includes a vector subset a, a vector subset B, and a vector subset C, and the semantic analysis model is used to analyze the relevance degrees of the word segmentation vectors in the vector subset a, the vector subset B, and the vector subset C respectively, so that the relevance degree of the word segmentation vector in the vector subset a is 80, the relevance degree of the word segmentation vector in the vector subset B is 70, and the relevance degree of the word segmentation vector in the vector subset C is 60, and then the vector subset a is determined to be the first feature subset of the problem in the target text.
In detail, after the first feature subset and the second feature subset are extracted, a preset activation function may be used to calculate a vector output value for each feature vector in the feature subsets, and the feature vectors whose vector output values are greater than a preset output threshold are selected as the key semantics of the question in the target text and of the answer corresponding to the question. The second activation function may be the same as the first activation function, and may include, but is not limited to, a sigmoid activation function, a softmax activation function and a ReLU activation function.
In the embodiment of the present invention, the semantics of the question in the target text and the semantics of the answer corresponding to the question are feature vectors obtained by performing semantic analysis, and the distance value between the feature vector of the question in the target text and the feature vector of the answer corresponding to the question is calculated, where the calculation formula is as follows:
(The distance formula is shown as an image in the original publication.) In the formula, D is the distance value, R is the feature vector of the question in the target text, T is the feature vector of the answer corresponding to the question, and θ is a preset coefficient.
In the embodiment of the invention, a larger distance value gives a lower matching value, and a smaller distance value gives a higher matching value. For example, suppose there are feature vectors A and B for the questions of two target texts and feature vectors C and D for the corresponding answers. If the distance value between feature vector A and feature vector C is 70, and the distance value between feature vector B and feature vector D is 40, then the matching value between the first target text's question and its answer may be 1 - (70/100) = 0.3, and the matching value between the second target text's question and its answer may be 1 - (40/100) = 0.6.
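The conversion from distance to matching value in this example can be sketched as follows; the Euclidean distance and the 0-100 scaling are assumptions, since the patent's exact distance formula (with the coefficient θ) is shown only as an image.

```python
# Sketch: turn a question/answer semantic distance into a matching value, following
# the worked example (distance 70 -> match 0.3, distance 40 -> match 0.6).
import numpy as np

def matching_value(question_vec: np.ndarray, answer_vec: np.ndarray) -> float:
    distance = float(np.linalg.norm(question_vec - answer_vec))  # stand-in for D
    distance = min(distance, 100.0)                              # clamp to the 0-100 scale
    return 1.0 - distance / 100.0                                # match = 1 - D/100
```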
S5, obtaining user scores, and calculating the poor rating rate of the user scores according to a preset rating rule;
in the embodiment of the invention, the user scoring means that after each service is finished, an evaluation page is displayed on a screen of the question-answering robot, the user is invited to score, and a scoring result can be used as the user scoring.
In the embodiment of the invention, computer statements with data-fetching functions, such as Java statements or Python statements, can be used to fetch the pre-stored user scores from a pre-constructed storage area, where the storage area includes but is not limited to a database, a blockchain node and a network cache.
For example, suppose three user scores A, B and C are obtained, where score A is 4 stars, score B is 2 stars and score C is 1 star, and any score smaller than a preset threshold (e.g., 4 stars) is a poor rating; then scores B and C are poor ratings. The poor rating rate can be calculated as the ratio of the number of poor ratings to the number of user scores, i.e. 2/3 ≈ 66.67%.
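A sketch of this rating rule is:

```python
# Sketch of the poor-rating-rate rule: scores below a preset threshold (here 4 stars)
# count as poor ratings.
def poor_rating_rate(scores: list[int], threshold: int = 4) -> float:
    poor = sum(1 for s in scores if s < threshold)
    return poor / len(scores) if scores else 0.0

print(round(poor_rating_rate([4, 2, 1]), 4))   # 0.6667, i.e. about 66.67%
```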
In the embodiment of the invention, the poor evaluation rate is an important index for evaluating the interactive performance of the question-answering robot, is an active evaluation of the user on the performance of the question-answering robot, and is an important reference for measuring the health degree/capability of the robot.
S6, calculating the error rate, the semantic clarity rate, the repetition rate, the question-answer matching value and the poor evaluation rate by using a preset weight algorithm to obtain the question-answer robot health degree score.
In the embodiment of the invention, the error rate, the repetition rate and the poor evaluation rate are negative indexes, while the semantic clarity rate and the question-answer matching value are positive indexes; the positive indexes are inverted to obtain a semantic unclarity rate and a question-answer mismatch value, so that all indexes become negative indexes. The importance of each index can be assessed by manual calibration and set as a parameter of the preset weight algorithm to calculate the health degree score of the question-answering robot.
In the embodiment of the present invention, the following weight algorithm may be used to calculate five evaluation indexes, i.e., the error rate, the semantic unclear rate, the repetition rate, the question-answer mismatching value, and the poor evaluation rate, so as to obtain a health score:
(The weighting formula is shown as an image in the original publication.) In the formula, G is the health score, n is the number of evaluation indexes, Q_i is the value of the i-th evaluation index, and P_i is the i-th preset weight coefficient.
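Since the exact weighting formula appears only as an image, the following sketch assumes a simple weighted sum of the negative indexes subtracted from 1; all index values and weights are hypothetical (loosely echoing the worked examples above).

```python
# Sketch of the final weighting step. The weighted-sum form and the "1 - penalty"
# convention are assumptions standing in for the patent's formula over Q_i and P_i.
def health_score(indexes: dict[str, float], weights: dict[str, float]) -> float:
    # indexes holds negative indicators in [0, 1]: error rate, semantic unclarity rate,
    # repetition rate, question-answer mismatch value, poor evaluation rate.
    penalty = sum(weights[name] * value for name, value in indexes.items())
    return 1.0 - penalty          # higher score = healthier robot (assumption)

score = health_score(
    {"error": 0.17, "unclarity": 0.30, "repetition": 0.20, "mismatch": 0.40, "poor": 0.67},
    {"error": 0.2, "unclarity": 0.2, "repetition": 0.2, "mismatch": 0.2, "poor": 0.2},
)
```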
According to the embodiment of the invention, the question and answer text, the human-computer interaction text and the user ratings of the question-answering robot are evaluated from multiple aspects to obtain the error rate, semantic clarity rate, repetition rate, question-answer matching value and poor evaluation rate as evaluation indexes, so that the original corpus and the human-computer interaction state of the question-answering robot are evaluated comprehensively, the evaluation indexes are more diversified, and the evaluation result is more accurate. Therefore, the question-answering robot health degree evaluation method provided by the invention can solve the problem of inaccurate robot health degree evaluation.
Fig. 6 is a functional block diagram of a health assessment apparatus for a question-answering robot according to an embodiment of the present invention.
The question-answering robot health degree evaluation device 100 according to the present invention may be installed in an electronic device. According to the realized functions, the question-answering robot health degree evaluation device 100 can comprise an error rate evaluation module 101, a semantic definition rate evaluation module 102, a repetition rate evaluation module 103, a question-answering matching value evaluation module 104, a poor evaluation rate evaluation module 105 and a health degree scoring module 106. The module of the present invention, which may also be referred to as a unit, refers to a series of computer program segments that can be executed by a processor of an electronic device and that can perform a fixed function, and that are stored in a memory of the electronic device.
In the present embodiment, the functions regarding the respective modules/units are as follows:
the error rate evaluation module 101 is configured to obtain a question and answer text of a preset question and answer robot, perform wrongly written character inspection on the question and answer text, and calculate an error rate of the question and answer text according to the number of wrongly written characters;
the semantic definition rate evaluation module 102 is configured to extract text semantics of the question and answer text, perform semantic definition inspection on the text semantics, and calculate a semantic definition rate of the question and answer text according to a result of the semantic definition inspection;
the repetition rate evaluation module 103 is configured to obtain a human-computer interaction text of the question-answering robot, extract a machine text in the human-computer interaction text, count repeated texts in the machine text, and calculate a repetition rate of the human-computer interaction text according to the number of the repeated texts;
the question-answer matching value evaluation module 104 is configured to select one of the human-computer interaction texts as a target text one by one from the human-computer interaction texts, calculate a matching value between semantics of a question in the target text and semantics of an answer corresponding to the question, and calculate a question-answer matching value of the interaction text according to the matching value;
the poor rating evaluation module 105 is configured to obtain a user rating, and calculate a poor rating of the user rating according to a preset rating rule;
the health degree scoring module 106 is configured to calculate the error rate, the semantic clarity rate, the repetition rate, the question-answer matching value, and the poor evaluation rate by using a preset weighting algorithm, so as to obtain a health degree score of the question-answer robot.
In detail, when the modules in the health degree evaluation device 100 of the question-answering robot according to the embodiment of the present invention are used, the same technical means as the health degree evaluation method of the question-answering robot described in fig. 1 to 5 are adopted, and the same technical effects can be produced, which are not described herein again.
Fig. 7 is a schematic structural diagram of an electronic device for implementing a method for evaluating health of a question-answering robot according to an embodiment of the present invention.
The electronic device 1 may include a processor 10, a memory 11, a communication bus 12, and a communication interface 13, and may further include a computer program, such as a question-answering robot health assessment program, stored in the memory 11 and executable on the processor 10.
In some embodiments, the processor 10 may be composed of an integrated circuit, for example, a single packaged integrated circuit, or may be composed of a plurality of integrated circuits packaged with the same function or different functions, and includes one or more Central Processing Units (CPUs), a microprocessor, a digital Processing chip, a graphics processor, a combination of various control chips, and the like. The processor 10 is a Control Unit (Control Unit) of the electronic device, connects various components of the electronic device by using various interfaces and lines, and executes various functions and processes data of the electronic device by running or executing programs or modules (for example, executing a question answering robot health degree evaluation program and the like) stored in the memory 11 and calling data stored in the memory 11.
The memory 11 includes at least one type of readable storage medium including flash memory, removable hard disks, multimedia cards, card-type memory (e.g., SD or DX memory, etc.), magnetic memory, magnetic disks, optical disks, etc. The memory 11 may in some embodiments be an internal storage unit of the electronic device, for example a removable hard disk of the electronic device. The memory 11 may also be an external storage device of the electronic device in other embodiments, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the electronic device. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device. The memory 11 may be used to store not only application software installed in the electronic device and various types of data, such as codes of a robot health assessment program, but also temporarily store data that has been output or will be output.
The communication bus 12 may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus. The bus may be divided into an address bus, a data bus, a control bus, etc. The bus is arranged to enable connection communication between the memory 11 and at least one processor 10 or the like.
The communication interface 13 is used for communication between the electronic device and other devices, and includes a network interface and a user interface. Optionally, the network interface may include a wired interface and/or a wireless interface (e.g., WI-FI interface, bluetooth interface, etc.), which are typically used to establish a communication connection between the electronic device and other electronic devices. The user interface may be a Display (Display), an input unit such as a Keyboard (Keyboard), and optionally a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable, among other things, for displaying information processed in the electronic device and for displaying a visualized user interface.
Fig. 7 only shows an electronic device with components, and it will be understood by a person skilled in the art that the structure shown in fig. 7 does not constitute a limitation of the electronic device 1, and may comprise fewer or more components than shown, or a combination of certain components, or a different arrangement of components.
For example, although not shown, the electronic device may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 10 through a power management device, so that functions of charge management, discharge management, power consumption management and the like are realized through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The electronic device may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.
The question-answering robot health assessment program stored in the memory 11 of the electronic device 1 is a combination of a plurality of instructions, and when running in the processor 10, can realize:
acquiring a question and answer text of a preset question and answer robot, carrying out wrongly written character inspection on the question and answer text, and calculating the error rate of the question and answer text according to the number of wrongly written characters;
extracting text semantics of the question and answer text, performing semantic definition inspection on the text semantics, and calculating the semantic definition of the question and answer text according to the result of the semantic definition inspection;
acquiring a human-computer interaction text of the question-answering robot, extracting a machine text in the human-computer interaction text, counting repeated texts in the machine text, and calculating the repetition rate of the human-computer interaction text according to the number of the repeated texts;
selecting one of the human-computer interaction texts as a target text one by one from the human-computer interaction texts, calculating a matching value between the semantics of a question in the target text and the semantics of an answer corresponding to the question, and calculating a question-answer matching value of the interaction text according to the matching value;
obtaining user scores, and calculating the poor rating rate of the user scores according to a preset rating rule;
and calculating the error rate, the semantic clarity rate, the repetition rate, the question-answer matching value and the poor evaluation rate by using a preset weight algorithm to obtain the question-answer robot health degree score.
Specifically, the specific implementation method of the instruction by the processor 10 may refer to the description of the relevant steps in the embodiment corresponding to the drawings, which is not described herein again.
Further, the integrated modules/units of the electronic device 1, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. The computer readable storage medium may be volatile or non-volatile. For example, the computer-readable medium may include: any entity or device capable of carrying said computer program code, recording medium, U-disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM).
The present invention also provides a computer-readable storage medium, storing a computer program which, when executed by a processor of an electronic device, may implement:
acquiring a question and answer text of a preset question and answer robot, carrying out wrongly written character inspection on the question and answer text, and calculating the error rate of the question and answer text according to the number of wrongly written characters;
extracting text semantics of the question and answer text, performing semantic definition inspection on the text semantics, and calculating the semantic definition of the question and answer text according to the result of the semantic definition inspection;
acquiring a human-computer interaction text of the question-answering robot, extracting a machine text in the human-computer interaction text, counting repeated texts in the machine text, and calculating the repetition rate of the human-computer interaction text according to the number of the repeated texts;
selecting one of the human-computer interaction texts as a target text one by one from the human-computer interaction texts, calculating a matching value between the semantics of a question in the target text and the semantics of an answer corresponding to the question, and calculating a question-answer matching value of the interaction text according to the matching value;
obtaining user scores, and calculating the poor rating rate of the user scores according to a preset rating rule;
and calculating the error rate, the semantic clarity rate, the repetition rate, the question-answer matching value and the poor evaluation rate by using a preset weight algorithm to obtain the question-answer robot health degree score.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method can be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
The blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms and encryption algorithms. A blockchain is essentially a decentralized database: a series of data blocks linked by cryptographic methods, each data block containing the information of a batch of network transactions, which is used to verify the validity (anti-counterfeiting) of the information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
The embodiment of the application can acquire and process related data based on an artificial intelligence technology. Among them, Artificial Intelligence (AI) is a theory, method, technique and application system that simulates, extends and expands human Intelligence using a digital computer or a machine controlled by a digital computer, senses the environment, acquires knowledge and uses the knowledge to obtain the best result.
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.
Finally, it should be noted that the above embodiments are only intended to illustrate, not to limit, the technical solutions of the present invention. Although the present invention is described in detail with reference to the preferred embodiments, those skilled in the art should understand that modifications or equivalent substitutions may be made to the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (10)

1. A question-answering robot health degree assessment method is characterized by comprising the following steps:
acquiring a question and answer text of a preset question and answer robot, carrying out wrongly written character inspection on the question and answer text, and calculating the error rate of the question and answer text according to the number of wrongly written characters;
extracting text semantics of the question and answer text, performing a semantic clarity check on the text semantics, and calculating the semantic clarity rate of the question and answer text according to the result of the semantic clarity check;
acquiring a human-computer interaction text of the question-answering robot, extracting a machine text in the human-computer interaction text, counting repeated texts in the machine text, and calculating the repetition rate of the human-computer interaction text according to the number of the repeated texts;
selecting the human-computer interaction texts one by one as a target text, calculating a matching value between the semantics of a question in the target text and the semantics of an answer corresponding to the question, and calculating a question-answer matching value of the interaction texts according to the matching values;
obtaining user scores, and calculating a poor evaluation rate of the user scores according to a preset rating rule;
and calculating the error rate, the semantic clarity rate, the repetition rate, the question-answer matching value and the poor evaluation rate by using a preset weight algorithm to obtain the question-answer robot health degree score.
2. The method for assessing the health degree of a question-answering robot according to claim 1, wherein the step of carrying out wrongly written character inspection on the question and answer text and calculating the error rate of the question and answer text according to the number of wrongly written characters comprises:
performing word segmentation processing on the question and answer text to obtain text word segmentation;
detecting the text participles by using a pre-constructed wrongly written character proofreading model to obtain a wrongly written character set;
and counting the number of wrongly-written characters in the wrongly-written character set, and calculating according to the number of wrongly-written characters and the number of the text participles to obtain the error rate of the question and answer text.
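As a non-authoritative illustration of the error-rate computation in claim 2, the sketch below counts flagged tokens against all tokens; the whitespace segmenter and the small typo lexicon stand in for the claimed word segmentation and the pre-constructed wrongly written character proofreading model, and are assumptions made only so the example runs.

    # Hypothetical stand-ins for the claimed word segmentation and proofreading model.
    KNOWN_TYPOS = {"teh", "recieve"}  # assumed typo lexicon, for illustration only

    def segment(text):
        """Toy word segmentation: split on whitespace (a real system would use
        a Chinese word segmenter)."""
        return text.split()

    def error_rate(qa_text):
        """Error rate = number of wrongly written tokens / number of tokens."""
        tokens = segment(qa_text)
        if not tokens:
            return 0.0
        wrong = [t for t in tokens if t.lower() in KNOWN_TYPOS]
        return len(wrong) / len(tokens)

    print(error_rate("please recieve teh confirmation code"))  # 2 / 5 = 0.4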
3. The method for assessing the health of a question-answering robot according to claim 2, wherein the extracting text semantics of the question and answer text and performing a semantic clarity check on the text semantics comprises:
performing vector conversion on the text participles to obtain word vectors of the text participles;
carrying out weighted calculation on the word vector according to preset word segmentation weight to obtain a text vector;
and carrying out a semantic clarity check on the text vector by using a preset semantic processing model to obtain a semantic clarity value.
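As an illustrative sketch of the weighting step in claim 3, a text vector can be formed as a weighted sum of word vectors; the toy embedding table and the uniform default weights below are assumptions introduced for this example and do not reflect the preset word segmentation weights of the claim.

    import numpy as np

    # Hypothetical 4-dimensional word vectors; a real system would look up
    # pre-trained embeddings for each text participle.
    EMBEDDINGS = {
        "reset":    np.array([0.2, 0.1, 0.7, 0.0]),
        "password": np.array([0.1, 0.8, 0.3, 0.1]),
    }

    def text_vector(tokens, weights=None):
        """Weighted sum of word vectors; unknown tokens fall back to a zero
        vector, and uniform weights are used when none are given."""
        dim = len(next(iter(EMBEDDINGS.values())))
        if weights is None:
            weights = [1.0 / len(tokens)] * len(tokens)
        vec = np.zeros(dim)
        for token, w in zip(tokens, weights):
            vec += w * EMBEDDINGS.get(token, np.zeros(dim))
        return vec

    print(text_vector(["reset", "password"]))  # average of the two word vectors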
4. The method for assessing the health of a question-answering robot according to claim 3, wherein the carrying out a semantic clarity check on the text vector by using a preset semantic processing model to obtain a semantic clarity value comprises:
performing convolution and pooling on the text vector by using a preset semantic processing model to obtain low-dimensional feature expression of the text vector;
mapping the low-dimensional feature expression to a pre-constructed high-dimensional space by using a preset mapping function to obtain a high-dimensional feature expression of the text vector;
and calculating a feature output value of each feature in the high-dimensional feature expression by using a preset first activation function, and obtaining the semantic clarity value according to the feature output values.
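Claim 4 specifies convolution, pooling, a mapping function and a first activation function without fixing their concrete forms. The sketch below is one possible, assumed instantiation (a 1-D convolution with max pooling, a random linear mapping, and a sigmoid readout) intended only to make the data flow tangible.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def semantic_clarity(text_vec, conv_kernel, proj):
        """Assumed stand-in for the preset semantic processing model:
        1-D convolution + max pooling (low-dimensional feature expression),
        a linear mapping into a higher-dimensional space, and a sigmoid
        activation whose mean output is read as the clarity value in [0, 1]."""
        conv = np.convolve(text_vec, conv_kernel, mode="valid")  # convolution
        pooled = np.array([conv.max()])                          # max pooling
        high_dim = proj @ pooled                                 # mapping function
        outputs = sigmoid(high_dim)                              # first activation function
        return float(outputs.mean())

    rng = np.random.default_rng(0)
    text_vec = rng.normal(size=8)
    proj = rng.normal(size=(16, 1))
    print(semantic_clarity(text_vec, conv_kernel=np.ones(3) / 3, proj=proj))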
5. The method for assessing the health of a question-answering robot according to claim 1, wherein the steps of extracting the machine texts in the human-computer interaction texts, counting the repeated texts in the machine texts, and calculating the repetition rate of the human-computer interaction texts according to the number of the repeated texts comprise:
classifying the human-computer interaction text by using a clustering algorithm to obtain a user text and a machine text;
extracting repeated texts of the machine texts to obtain the number of the repeated texts;
and taking the ratio of the number of repeated texts to the number of the machine texts as the repetition rate of the human-computer interaction texts.
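As an illustration of the repetition-rate computation in claim 5, the sketch below counts exact duplicates among machine-side utterances; the explicit role labels replace the claimed clustering-based separation of user texts and machine texts, and reading "repeated texts" as occurrences of any text appearing more than once is an assumption of this example.

    from collections import Counter

    # Hypothetical interaction log; in claim 5 the user/machine split comes from
    # a clustering algorithm, here it is simply given by a role label.
    dialogue = [
        {"role": "user",    "text": "How do I reset my password?"},
        {"role": "machine", "text": "Please verify your identity first."},
        {"role": "user",    "text": "I already did that."},
        {"role": "machine", "text": "Please verify your identity first."},
        {"role": "machine", "text": "You can reset it on the settings page."},
    ]

    def repetition_rate(turns):
        """Repetition rate = number of repeated machine texts / number of machine texts."""
        machine_texts = [t["text"] for t in turns if t["role"] == "machine"]
        if not machine_texts:
            return 0.0
        counts = Counter(machine_texts)
        repeated = sum(c for c in counts.values() if c > 1)  # occurrences of duplicated texts
        return repeated / len(machine_texts)

    print(repetition_rate(dialogue))  # 2 repeated occurrences / 3 machine texts, about 0.67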
6. The question-answering robot health assessment method according to any one of claims 1 to 5, wherein the step of selecting the human-computer interaction texts one by one as a target text and calculating a matching value between the semantics of a question in the target text and the semantics of an answer corresponding to the question comprises:
extracting interactive texts in the human-computer interactive texts one by one according to a sequential relation to serve as target texts, wherein the interactive texts comprise questions and answers corresponding to the questions;
extracting semantics of a question in the target text and semantics of an answer corresponding to the question;
and calculating a distance value of the semantics of the question and the semantics of the answer corresponding to the question, and calculating to obtain a matching value according to the distance value.
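Claim 6 derives the matching value from a distance between the question semantics and the answer semantics without fixing the distance measure or the conversion. The sketch below uses cosine distance mapped linearly into [0, 1], which is one plausible but assumed choice.

    import numpy as np

    def matching_value(question_sem, answer_sem):
        """Map the distance between two semantic vectors to a matching value:
        cosine distance d in [0, 2] becomes the similarity 1 - d / 2."""
        q = np.asarray(question_sem, dtype=float)
        a = np.asarray(answer_sem, dtype=float)
        cos_sim = q @ a / (np.linalg.norm(q) * np.linalg.norm(a))
        distance = 1.0 - cos_sim        # cosine distance
        return 1.0 - distance / 2.0     # matching value in [0, 1]

    print(matching_value([0.2, 0.9, 0.1], [0.25, 0.85, 0.05]))  # close to 1.0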
7. The question-answering robot health assessment method according to claim 6, wherein the extracting semantics of a question in the target text and semantics of an answer corresponding to the question comprises:
performing word segmentation processing on the question in the target text and the answer corresponding to the question respectively to obtain a first text word segmentation and a second text word segmentation;
performing vector conversion on the first text participle and the second text participle to obtain a first participle vector and a second participle vector;
respectively constructing vector subset sets of the first word segmentation vector and the second word segmentation vector, and respectively performing feature extraction on the vector subset sets of the first word segmentation vector and the second word segmentation vector by utilizing a pre-constructed semantic analysis model to obtain a first feature subset and a second feature subset;
and calculating a vector output value of each vector in the first feature subset and the second feature subset by using a preset second activation function, and respectively selecting the feature vectors of which the vector output values are greater than a preset threshold value as the semantics of the question in the target text and the semantics of the answer corresponding to the question.
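As an assumed illustration of the threshold-based selection in claim 7, the sketch below applies a sigmoid to the norm of each feature vector as the second activation function and keeps the vectors whose output value exceeds the preset threshold; the choice of activation and threshold is made only for this example.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def select_semantics(feature_subset, threshold=0.8):
        """Apply the assumed second activation function (a sigmoid over the
        vector norm) to each feature vector and keep the vectors whose output
        value exceeds the preset threshold."""
        selected = []
        for vec in feature_subset:
            output_value = sigmoid(np.linalg.norm(vec))
            if output_value > threshold:
                selected.append(vec)
        return selected

    features = [np.array([0.1, 0.1]), np.array([1.5, 2.0]), np.array([0.9, 1.2])]
    print(select_semantics(features))  # keeps the two vectors with large output values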
8. A question-answering robot health degree evaluation device, characterized by comprising:
the error rate evaluation module is used for acquiring a question and answer text of a preset question and answer robot, carrying out wrongly written character detection on the question and answer text and calculating the error rate of the question and answer text according to the number of wrongly written characters;
the semantic clarity rate evaluation module is used for extracting the text semantics of the question and answer text, carrying out a semantic clarity check on the text semantics, and calculating the semantic clarity rate of the question and answer text according to the result of the semantic clarity check;
the repetition rate evaluation module is used for acquiring a human-computer interaction text of the question-answering robot, extracting a machine text in the human-computer interaction text, counting repeated texts in the machine text, and calculating the repetition rate of the human-computer interaction text according to the number of the repeated texts;
the question-answer matching value evaluation module is used for selecting the human-computer interaction texts one by one as a target text, calculating a matching value between the semantics of a question in the target text and the semantics of an answer corresponding to the question, and calculating the question-answer matching value of the interaction texts according to the matching values;
the poor evaluation rate evaluation module is used for obtaining user scores and calculating the poor evaluation rate of the user scores according to a preset rating rule;
and the health degree scoring module is used for calculating the error rate, the semantic clarity rate, the repetition rate, the question-answer matching value and the poor evaluation rate by using a preset weight algorithm to obtain the health degree score of the question-answer robot.
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor, the computer program being executed by the at least one processor to enable the at least one processor to perform the question-answering robot health assessment method according to any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the question-answering robot health assessment method according to any one of claims 1 to 7.
CN202111150154.5A 2021-09-29 2021-09-29 Question-answering robot health evaluation method, device, equipment and storage medium Active CN113887930B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111150154.5A CN113887930B (en) 2021-09-29 2021-09-29 Question-answering robot health evaluation method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111150154.5A CN113887930B (en) 2021-09-29 2021-09-29 Question-answering robot health evaluation method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113887930A true CN113887930A (en) 2022-01-04
CN113887930B CN113887930B (en) 2024-04-23

Family

ID=79008046

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111150154.5A Active CN113887930B (en) 2021-09-29 2021-09-29 Question-answering robot health evaluation method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113887930B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100094814A1 (en) * 2008-10-13 2010-04-15 James Alexander Levy Assessment Generation Using the Semantic Web
US20140195226A1 (en) * 2013-01-04 2014-07-10 Electronics And Telecommunications Research Institute Method and apparatus for correcting error in speech recognition system
US10332508B1 (en) * 2016-03-31 2019-06-25 Amazon Technologies, Inc. Confidence checking for speech processing and query answering
US10388274B1 (en) * 2016-03-31 2019-08-20 Amazon Technologies, Inc. Confidence checking for speech processing and query answering
WO2021151271A1 (en) * 2020-05-20 2021-08-05 平安科技(深圳)有限公司 Method and apparatus for textual question answering based on named entities, and device and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘亮亮; 曹存根: "Research on automatic proofreading methods for Chinese 'non-multi-character word errors'", 计算机科学 (Computer Science), no. 10, 15 October 2016 (2016-10-15), pages 205-210 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114299068A (en) * 2022-03-07 2022-04-08 中南大学湘雅医院 Intelligent decision-making system for evaluating psoriasis severity degree based on skin image
CN116450807A (en) * 2023-06-15 2023-07-18 中国标准化研究院 Massive data text information extraction method and system
CN116450807B (en) * 2023-06-15 2023-08-11 中国标准化研究院 Massive data text information extraction method and system
CN117350302A (en) * 2023-11-04 2024-01-05 湖北为华教育科技集团有限公司 Semantic analysis-based language writing text error correction method, system and man-machine interaction device
CN117350302B (en) * 2023-11-04 2024-04-02 湖北为华教育科技集团有限公司 Semantic analysis-based language writing text error correction method, system and man-machine interaction device

Also Published As

Publication number Publication date
CN113887930B (en) 2024-04-23

Similar Documents

Publication Publication Date Title
CN113887930B (en) Question-answering robot health evaluation method, device, equipment and storage medium
CN113312461A (en) Intelligent question-answering method, device, equipment and medium based on natural language processing
CN112883190A (en) Text classification method and device, electronic equipment and storage medium
CN113704429A (en) Semi-supervised learning-based intention identification method, device, equipment and medium
CN115392237B (en) Emotion analysis model training method, device, equipment and storage medium
CN113807973A (en) Text error correction method and device, electronic equipment and computer readable storage medium
CN113821622A (en) Answer retrieval method and device based on artificial intelligence, electronic equipment and medium
CN114781832A (en) Course recommendation method and device, electronic equipment and storage medium
CN115309864A (en) Intelligent sentiment classification method and device for comment text, electronic equipment and medium
CN114840684A (en) Map construction method, device and equipment based on medical entity and storage medium
CN114862140A (en) Behavior analysis-based potential evaluation method, device, equipment and storage medium
CN114220536A (en) Disease analysis method, device, equipment and storage medium based on machine learning
CN114706961A (en) Target text recognition method, device and storage medium
CN113344125A (en) Long text matching identification method and device, electronic equipment and storage medium
CN111445271A (en) Model generation method, and prediction method, system, device and medium for cheating hotel
CN115099680B (en) Risk management method, apparatus, device and storage medium
CN115510188A (en) Text keyword association method, device, equipment and storage medium
CN114595321A (en) Question marking method and device, electronic equipment and storage medium
CN115146064A (en) Intention recognition model optimization method, device, equipment and storage medium
CN114708073A (en) Intelligent detection method and device for surrounding mark and serial mark, electronic equipment and storage medium
CN113808616A (en) Voice compliance detection method, device, equipment and storage medium
CN113157677A (en) Data filtering method and device based on trust behaviors
CN112632264A (en) Intelligent question and answer method and device, electronic equipment and storage medium
CN111680513B (en) Feature information identification method and device and computer readable storage medium
CN114880449B (en) Method and device for generating answers of intelligent questions and answers, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant