CN116628161A - Answer generation method, device, equipment and storage medium - Google Patents

Answer generation method, device, equipment and storage medium

Info

Publication number
CN116628161A
CN116628161A (application CN202310593894.9A)
Authority
CN
China
Prior art keywords
text
answer
vector
question
decoding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310593894.9A
Other languages
Chinese (zh)
Inventor
张镛 (Zhang Yong)
王健宗 (Wang Jianzong)
程宁 (Cheng Ning)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202310593894.9A
Publication of CN116628161A
Legal status: Pending

Classifications

    • G - PHYSICS
        • G06 - COMPUTING; CALCULATING OR COUNTING
            • G06F - ELECTRIC DIGITAL DATA PROCESSING
                • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
                    • G06F 16/30 - Information retrieval of unstructured textual data
                        • G06F 16/33 - Querying
                            • G06F 16/332 - Query formulation
                                • G06F 16/3329 - Natural language query formulation or dialogue systems
                            • G06F 16/335 - Filtering based on additional data, e.g. user or group profiles
                        • G06F 16/35 - Clustering; Classification
                • G06F 18/00 - Pattern recognition
                    • G06F 18/20 - Analysing
                        • G06F 18/24 - Classification techniques
                            • G06F 18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
                                • G06F 18/2415 - based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
                • G06F 40/00 - Handling natural language data
                    • G06F 40/20 - Natural language analysis
                        • G06F 40/279 - Recognition of textual entities
                            • G06F 40/284 - Lexical analysis, e.g. tokenisation or collocates
                    • G06F 40/30 - Semantic analysis
            • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N 3/00 - Computing arrangements based on biological models
                    • G06N 3/02 - Neural networks
                        • G06N 3/04 - Architecture, e.g. interconnection topology
                            • G06N 3/045 - Combinations of networks
                                • G06N 3/0455 - Auto-encoder networks; Encoder-decoder networks
                        • G06N 3/08 - Learning methods
        • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
            • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
                • G16H 80/00 - ICT specially adapted for facilitating communication between medical practitioners or patients, e.g. for collaborative diagnosis, therapy or health monitoring
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
        • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
            • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], i.e. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
                • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention relates to the fields of artificial intelligence and digital healthcare, and provides an answer generation method, device, equipment and storage medium. According to the method, an input text is generated from a question to be tested, its question type, an evidence text and a historical answer; the input text is encoded to obtain a text code; the text code is passed through a perception layer to obtain a perception vector, which is normalized to obtain a first output probability; the perception vector and the input text are decoded based on the text code to obtain decoding information; the decoding information is used for prediction to obtain a second output probability; and an answer to the question is generated from the first and second output probabilities. The method can accurately output answers to questions using a neural network. In addition, the invention also relates to blockchain technology: the answers to the questions can be stored in a blockchain.

Description

Answer generation method, device, equipment and storage medium
Technical Field
The present invention relates to the field of artificial intelligence and digital medical technology, and in particular, to an answer generating method, apparatus, device and storage medium.
Background
With the development of artificial intelligence, automatic question-answering systems have emerged and can now be applied in the medical field. Existing automatic question-answering systems usually retrieve the corresponding answer directly from an evidence text according to the question. This approach cannot produce an answer in the format the question requires: for a counting question, for example, the evidence text may contain only enumerated description information, so existing systems cannot generate reasonable answer information.
Therefore, how to accurately and reasonably generate the answer text corresponding to a question has become a technical problem to be solved.
Disclosure of Invention
In view of the foregoing, it is necessary to provide an answer generation method, apparatus, device and storage medium that can solve the technical problem of how to accurately and reasonably generate the answer text corresponding to a question.
In one aspect, the present invention provides an answer generation method, where the answer generation method includes:
generating an input text according to the acquired question to be tested, the question type of the question to be tested, the evidence text corresponding to the question to be tested, and the historical answer corresponding to the historical question associated with the question to be tested;
acquiring a pre-trained answer recognition model, wherein the answer recognition model comprises a plurality of encoders, a perception network layer, a plurality of decoders and a prediction output layer;
encoding the input text based on the plurality of encoders to obtain a text code;
performing perception processing on the text code based on the perception network layer to obtain a perception vector, and performing normalization processing on the perception vector to obtain a first output probability of each text vocabulary in the input text;
decoding the perception vector and the input text based on the plurality of decoders and the text code to obtain decoding information;
predicting the decoding information based on the prediction output layer to obtain a second output probability of each template vocabulary of the plurality of decoders;
and generating an answer to the question to be tested according to the first output probability and the second output probability.
According to a preferred embodiment of the present invention, the generating the input text according to the acquired question to be tested, the question type of the question to be tested, the evidence text corresponding to the question to be tested, and the historical answer corresponding to the historical question associated with the question to be tested includes:
extracting the query words in the question to be tested;
identifying the question type by matching the query words against a preset vocabulary;
identifying the generation time and the session of the question to be tested;
acquiring request questions according to the generation time and the session, and acquiring the request time of each request question;
screening the historical questions from the request questions according to the request time;
obtaining the answer texts corresponding to the historical questions as the historical answers;
and splicing the question type, the question to be tested, a preset identifier, the evidence text and the historical answers to form the input text.
According to a preferred embodiment of the present invention, the encoding the input text based on the plurality of encoders includes:
characterizing each text vocabulary in the input text to obtain a text vector;
for any encoder, generating an attention vector for the text vector based on each group of weight matrices;
splicing the plurality of attention vectors to obtain a spliced vector;
calculating the product of a configuration matrix and the spliced vector to obtain attention information;
performing a full-connection operation on the attention information according to preset matrices and preset bias values to obtain an initial vector, wherein the generation formula of the initial vector is: y = max(a, xW₁ + b₁)W₂ + b₂, where y represents the initial vector, a represents a preset constant, x represents the attention information, W₁ and W₂ represent the preset matrices, and b₁ and b₂ represent the preset bias values;
and taking the initial vector as the text vector of the next encoder for encoding processing, until all of the plurality of encoders have participated in encoding, to obtain the text code.
According to a preferred embodiment of the present invention, the performing perception processing on the text code based on the perception network layer to obtain a perception vector includes:
for each hidden layer of the perception network layer, performing full-connection operation on the text codes based on network parameters of neurons in the hidden layer to obtain an output vector of the hidden layer;
and taking the output vector as a text code of the next hidden layer to perform perception processing until each hidden layer of the perception network layer participates in processing to obtain the perception vector.
According to a preferred embodiment of the present invention, the decoding the perception vector and the input text based on the plurality of decoders and the text code to obtain decoding information includes:
masking the input text to obtain a mask vector;
generating a target vector according to the perception vector and the mask vector;
for any decoder, performing attention analysis on the target vector based on that decoder and the text code to obtain attention information;
and performing full-connection processing on the attention information based on the decoding weight matrix and the decoding bias of that decoder to obtain an initial decoding, and taking the initial decoding as the target vector of the next decoder for decoding processing, until all of the plurality of decoders have participated in decoding, to obtain the decoding information.
According to a preferred embodiment of the present invention, the predicting the decoding information based on the prediction output layer to obtain the second output probability of each template vocabulary of the plurality of decoders includes:
activating the decoding information based on an activating function of the prediction output layer to obtain activating information;
and normalizing the activation information to obtain the second output probability.
According to a preferred embodiment of the present invention, the generating the answer to the question to be tested according to the first output probability and the second output probability includes:
performing weighted sum operation on the first output probability and the second output probability to obtain target probabilities of the text vocabulary and the template vocabulary;
determining the vocabulary with the target probability larger than a preset probability threshold as a target vocabulary;
and generating the answer to the question according to the target vocabulary.
On the other hand, the invention also provides an answer generating device, which comprises:
a generating unit, used for generating an input text according to the acquired question to be tested, the question type of the question to be tested, the evidence text corresponding to the question to be tested, and the historical answer corresponding to the historical question associated with the question to be tested;
an acquisition unit, used for acquiring a pre-trained answer recognition model, wherein the answer recognition model comprises a plurality of encoders, a perception network layer, a plurality of decoders and a prediction output layer;
an encoding unit, used for encoding the input text based on the plurality of encoders to obtain a text code;
a sensing unit, used for performing perception processing on the text code based on the perception network layer to obtain a perception vector, and normalizing the perception vector to obtain a first output probability of each text vocabulary in the input text;
a decoding unit, used for decoding the perception vector and the input text based on the plurality of decoders and the text code to obtain decoding information;
a prediction unit, used for predicting the decoding information based on the prediction output layer to obtain a second output probability of each template vocabulary of the plurality of decoders;
the generating unit is further used for generating an answer to the question to be tested according to the first output probability and the second output probability.
In another aspect, the present invention also proposes an electronic device, including:
a memory storing computer readable instructions; and
And a processor executing computer readable instructions stored in the memory to implement the answer generation method.
In another aspect, the present application also proposes a computer readable storage medium having stored therein computer readable instructions that are executed by a processor in an electronic device to implement the answer generation method.
According to the above technical solution, the input text is generated by combining the question to be tested, the question type, the evidence text and the historical answer. Because questions of different types call for different answer formats, adding the question type to the input text improves the accuracy and rationality of answer generation; adding the historical answer makes the subsequently generated answer more complete and guides the answer recognition model toward an answer that conforms to the question type. By arranging a plurality of encoders in the answer recognition model, the text code can attend to semantic information of different granularities from shallow layers to deep layers. Finally, generating the answer from both the first output probability and the second output probability avoids erroneous or redundant answer information.
Drawings
FIG. 1 is a flow chart of a preferred embodiment of the answer generation method of the invention.
Fig. 2 is a network configuration diagram of an answer identification model in the answer generation method of the invention.
FIG. 3 is a functional block diagram of a preferred embodiment of the answer generating device of the invention.
Fig. 4 is a schematic structural diagram of an electronic device according to a preferred embodiment of the present invention for implementing the answer generation method.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in detail with reference to the accompanying drawings and specific embodiments.
FIG. 1 is a flow chart of an answer generation method according to a preferred embodiment of the invention. The order of the steps in the flowchart may be changed and some steps may be omitted according to various needs.
The answer generation method can acquire and process related data based on artificial intelligence technology. Artificial intelligence (AI) is the theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, sense the environment, acquire knowledge, and use knowledge to obtain optimal results.
Artificial intelligence infrastructure technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and other directions.
The answer generation method is applied to one or more electronic devices, wherein the electronic devices are devices capable of automatically performing numerical calculation and/or information processing according to preset or stored computer readable instructions, and the hardware of the electronic devices comprises, but is not limited to, microprocessors, application specific integrated circuits (Application Specific Integrated Circuit, ASICs), programmable gate arrays (Field-Programmable Gate Array, FPGA), digital signal processors (Digital Signal Processor, DSP), embedded devices and the like.
The electronic device may be any electronic product that can interact with a user in a human-computer manner, such as a personal computer, tablet computer, smart phone, personal digital assistant (Personal Digital Assistant, PDA), game console, interactive internet protocol television (Internet Protocol Television, IPTV), smart wearable device, etc.
The electronic device may comprise a network device and/or a user device. Wherein the network device includes, but is not limited to, a single network electronic device, a group of electronic devices made up of multiple network electronic devices, or a Cloud based Cloud Computing (Cloud Computing) made up of a large number of hosts or network electronic devices.
The network in which the electronic device is located includes, but is not limited to: the internet, wide area networks, metropolitan area networks, local area networks, virtual private networks (Virtual Private Network, VPN), etc.
101, generating an input text according to the acquired question to be tested, the question type of the question to be tested, the evidence text corresponding to the question to be tested, and the historical answer corresponding to the historical question associated with the question to be tested.
In at least one embodiment of the present application, the question to be tested may be a question in English or in Chinese, which is not limited by the present application. The question to be tested may be a user question of any round in an automatic question-answering system; for example, it may be a counting question, and it may be a medical-related question.
The question type refers to the category corresponding to the question to be tested; for example, the question type may be a yes-or-no type.
The evidence text refers to the source text of the answer corresponding to the question to be tested; for example, if the question to be tested is a counting question, the evidence text may include specific enumerated values. The evidence text can be a medical text, such as an electronic healthcare record; electronic personal health records comprise a series of electronic records worth preserving and backing up, such as medical histories, electrocardiograms and medical images.
The historical questions are questions in the same question-answer session as the question to be tested, and their request times are earlier than the generation time of the question to be tested.
The historical answers refer to the answer information corresponding to the historical questions.
The input text refers to the text generated by splicing the question to be tested, the question type, the evidence text and the historical answers.
In at least one embodiment of the present invention, the generating, by the electronic device, of the input text according to the acquired question to be tested, the question type of the question to be tested, the evidence text corresponding to the question to be tested, and the historical answer corresponding to the historical question associated with the question to be tested includes:
extracting the query words in the question to be tested;
identifying the question type by matching the query words against a preset vocabulary;
identifying the generation time and the session of the question to be tested;
acquiring request questions according to the generation time and the session, and acquiring the request time of each request question;
screening the historical questions from the request questions according to the request time;
obtaining the answer texts corresponding to the historical questions as the historical answers;
and splicing the question type, the question to be tested, a preset identifier, the evidence text and the historical answers to form the input text.
The query words are the words representing the query in the question to be tested; for example, a query word may be what, how, is, are, etc.
The preset vocabulary includes, but is not limited to, the words corresponding to a plurality of question-related categories, for example what, how, is, are, etc.
The question type refers to the category corresponding to the preset-vocabulary word successfully matched with the query word.
The generation time refers to the specific time point at which the question to be tested is generated; the session may be a specific question-answer session in the automatic question-answering system, for example session 1001.
The request questions belong to the same question-answer session as the question to be tested, and their request times are earlier than the generation time.
A historical question is a request question whose request time falls within a target period, where the target period may be derived from the difference between the generation time and a preset interval; for example, if the generation time is 9:00 and the preset interval is [1 min, 5 min], the target period is 8:55-8:59.
The preset identifier may be any identifier capable of separating the question to be tested from the evidence text; for example, the preset identifier may be [SEP].
The question type can be accurately identified through the query words, and the historical questions can be reasonably screened out by combining the generation time and the session. The preset identifier separates the question to be tested from the evidence text, which reduces the recognition difficulty of the model. Generating the input text by combining the question type, the question to be tested, the preset identifier, the evidence text and the historical answers makes the generated answer conform to the answer format of the question type and more complete. A minimal sketch of this construction follows.
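For illustration only, here is a minimal Python sketch of the input-text construction described above. Every name in it (QUERY_WORD_TYPES, build_input_text, the session and time fields) is a hypothetical stand-in, not a name used in the patent, and the type mapping is an assumed example.

from datetime import timedelta

# Hypothetical query-word -> question-type mapping (illustrative values only).
QUERY_WORD_TYPES = {"how many": "counting", "what": "description",
                    "is": "yes_no", "are": "yes_no", "how": "method"}

def identify_question_type(question):
    # Match query words in the question against the preset vocabulary.
    lowered = question.lower()
    for word, q_type in QUERY_WORD_TYPES.items():
        if word in lowered:
            return q_type
    return "other"

def select_history_answers(requests, generation_time, session,
                           interval=(timedelta(minutes=1), timedelta(minutes=5))):
    # Keep requests from the same session whose request time falls in the
    # target period; e.g. generation time 9:00 with interval [1 min, 5 min]
    # gives the period 8:55-8:59.
    start = generation_time - interval[1]
    end = generation_time - interval[0]
    return [r["answer"] for r in requests
            if r["session"] == session and start <= r["time"] <= end]

def build_input_text(question, evidence, history_answers, sep="[SEP]"):
    # Splice: question type + question + [SEP] + evidence + history answers.
    q_type = identify_question_type(question)
    return " ".join([q_type, question, sep, evidence] + history_answers)

For example, "How many tablets per day?" would be labeled a counting question and that label would be spliced ahead of the question, the [SEP] identifier, the evidence text, and any historical answers.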
102, obtaining a pre-trained answer recognition model, wherein the answer recognition model comprises a plurality of encoders, a perception network layer, a plurality of decoders and a prediction output layer.
In at least one embodiment of the present invention, the answer identification model is used to identify a question answer of the question to be tested. The answer recognition model can be applied to an intelligent question-answering system, and the intelligent question-answering system can be applied to intelligent diagnosis and treatment and remote consultation.
The plurality of encoders are for generating text encodings of the input text.
The perceptual network layer is configured to generate a perceptual vector of the text encoding.
The plurality of decoders are used for generating decoding information of the text codes, and the number of the plurality of decoders is equal to the number of the plurality of encoders.
The prediction output layer is used for generating the probability of the decoding information on each template vocabulary.
In at least one embodiment of the present invention, before the pre-trained answer recognition model is obtained, the method further comprises:
constructing an answer recognition network;
obtaining training samples;
calculating the loss value of the training samples on the answer recognition network based on a preset cross-entropy loss function;
and adjusting the preset parameters of the answer recognition network based on the loss value until the loss value no longer decreases, so as to obtain the answer recognition model; a minimal training sketch follows.
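A minimal PyTorch training sketch of the steps above. The model interface, batch format, learning rate and the patience-based stopping rule are assumptions: the patent specifies only a cross-entropy loss and training until the loss no longer decreases.

import torch
import torch.nn as nn

def train_answer_recognition(model, loader, lr=1e-4, patience=3):
    # Cross-entropy loss over the vocabulary, per the training step above.
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    best, stale = float("inf"), 0
    while stale < patience:  # stop once the loss no longer decreases
        total = 0.0
        for input_ids, target_ids in loader:
            logits = model(input_ids)                 # (batch, seq, vocab), assumed interface
            loss = criterion(logits.flatten(0, 1),    # (batch*seq, vocab)
                             target_ids.flatten())    # (batch*seq,)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            total += loss.item()
        best, stale = (total, 0) if total < best else (best, stale + 1)
    return model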
As shown in fig. 2, a network structure diagram of an answer identification model in the answer generation method of the invention is shown. In fig. 2, the answer identification model includes an encoder 1, an encoder 2, a perceptual network layer, a decoder 1, a decoder 2, and a prediction output layer.
103, coding the input text based on the plurality of encoders to obtain text codes.
In at least one embodiment of the present invention, the text code refers to the vector information output, based on the input text, by the last encoder in the chain formed by the plurality of encoders.
In at least one embodiment of the present invention, the encoding, by the electronic device, of the input text based on the plurality of encoders to obtain the text code includes:
characterizing each text vocabulary in the input text to obtain a text vector;
for any encoder, generating an attention vector for the text vector based on each group of weight matrices;
splicing the plurality of attention vectors to obtain a spliced vector;
calculating the product of a configuration matrix and the spliced vector to obtain attention information;
performing a full-connection operation on the attention information according to preset matrices and preset bias values to obtain an initial vector, wherein the generation formula of the initial vector is: y = max(a, xW₁ + b₁)W₂ + b₂, where y represents the initial vector, a represents a preset constant, x represents the attention information, W₁ and W₂ represent the preset matrices, and b₁ and b₂ represent the preset bias values;
and taking the initial vector as the text vector of the next encoder for encoding processing, until all of the plurality of encoders have participated in encoding, to obtain the text code.
The text vocabulary can be generated after the input text is segmented based on a preset dictionary.
Each set of weight matrices may include a plurality of matrices. The attention vector is generated by analyzing attention of the text vector based on the matrixes.
The weight matrix, the configuration matrix, the preset matrix and the preset bias value may be obtained from model parameters of the answer identification model.
In the above embodiment, performing attention analysis on the text vector through multiple groups of weight matrices in each encoder improves how accurately the attention information characterizes the input text, and analyzing the attention information with the preset matrices and preset bias values further improves the representation capability of the initial vector, and thus of the text code. A sketch of one such encoder layer follows.
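A sketch of one encoder layer as just described: each group of weight matrices yields an attention vector, the vectors are spliced and multiplied by a configuration matrix, and the result passes through the full-connection operation y = max(a, xW₁ + b₁)W₂ + b₂. All shapes are assumptions, and the default a = 0 (which reduces the operation to the standard ReLU feed-forward layer) is an illustrative choice, not a value from the patent.

import torch

def encoder_layer(x, weight_groups, w_config, w1, b1, w2, b2, a=0.0):
    # x: (seq_len, d_model) text vectors for the input text.
    attn_vectors = []
    for w_q, w_k, w_v in weight_groups:   # one group of weight matrices per head
        q, k, v = x @ w_q, x @ w_k, x @ w_v
        scores = torch.softmax(q @ k.T / k.shape[-1] ** 0.5, dim=-1)
        attn_vectors.append(scores @ v)   # one attention vector per group
    # Splice the attention vectors, then multiply by the configuration matrix.
    attention_info = torch.cat(attn_vectors, dim=-1) @ w_config
    # Full-connection operation: y = max(a, x W1 + b1) W2 + b2.
    return torch.clamp(attention_info @ w1 + b1, min=a) @ w2 + b2

Stacking such layers, each taking the previous layer's output as its text vector, yields the text code.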
104, performing perception processing on the text code based on the perception network layer to obtain a perception vector, and performing normalization processing on the perception vector to obtain a first output probability of each text vocabulary in the input text.
In at least one embodiment of the present invention, the perceptual vector includes a token vector for each text word in the input text.
The first output probability refers to a probability generated for a plurality of the text words in combination with the plurality of encoders and the perceptual network layer.
In at least one embodiment of the present invention, the performing, by the electronic device, of perception processing on the text code based on the perception network layer to obtain the perception vector includes:
for each hidden layer of the perception network layer, performing full-connection operation on the text codes based on network parameters of neurons in the hidden layer to obtain an output vector of the hidden layer;
and taking the output vector as a text code of the next hidden layer to perform perception processing until each hidden layer of the perception network layer participates in processing to obtain the perception vector.
Wherein the network parameters include a parameter matrix and a parameter bias value.
Processing the text code with the neurons of each hidden layer in the perception network layer improves the representation capability of the perception vector, as the sketch below illustrates.
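A sketch of the perception network layer and of the normalization that yields the first output probability. The ReLU activation is an assumption: the patent specifies only a full-connection operation per hidden layer using a parameter matrix and a parameter bias value.

import torch

def perception_layer(text_code, hidden_params):
    # hidden_params: one (weight, bias) pair per hidden layer.
    h = text_code
    for weight, bias in hidden_params:
        h = torch.relu(h @ weight + bias)  # full-connection operation of one hidden layer
    return h                               # the perception vector

def first_output_probability(perception_vector):
    # Normalize over the text vocabulary of the input text.
    return torch.softmax(perception_vector, dim=-1)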
And 105, decoding the perception vector and the input text based on the plurality of decoders and the text codes to obtain decoding information.
In at least one embodiment of the present invention, the decoding information refers to the information output by the last decoder in the chain formed by the plurality of decoders.
In at least one embodiment of the present application, the decoding, by the electronic device, of the perception vector and the input text based on the plurality of decoders and the text code to obtain the decoding information includes:
masking the input text to obtain a mask vector;
generating a target vector according to the perception vector and the mask vector;
for any decoder, performing attention analysis on the target vector based on that decoder and the text code to obtain attention information;
and performing full-connection processing on the attention information based on the decoding weight matrix and the decoding bias of that decoder to obtain an initial decoding, and taking the initial decoding as the target vector of the next decoder for decoding processing, until all of the plurality of decoders have participated in decoding, to obtain the decoding information.
The target vector is a vector generated according to the element-wise average of the vector elements of the perception vector and those of the mask vector.
The initial decoding is generated in a manner similar to the initial vector, which will not be repeated here.
Masking the input text prevents the decoders from relying on future information when decoding the text code, and combining the perception vector with the mask vector improves the accuracy of the attention analysis performed on the target vector, thereby improving the accuracy of the generated decoding information.
Specifically, the masking, by the electronic device, of the input text to obtain the mask vector includes:
extracting the vocabulary vector of each text vocabulary in the input text from the text vector;
counting the number of vector elements in each vocabulary vector;
padding the plurality of vocabulary vectors to the maximum element number to obtain padded vectors;
generating a vector matrix from the plurality of padded vectors;
identifying the matrix diagonal of the vector matrix, and identifying the lower-triangle elements of the vector matrix according to the matrix diagonal;
and masking the lower-triangle elements, the matrix obtained after the masking being determined as the mask vector.
The above embodiment not only ensures that the vector length of each text vocabulary is the same, but also prevents the decoders from relying on future information when decoding the text code. A combined sketch of the mask and one decoder step follows.
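A combined sketch of the masking and one decoder step. The patent masks one triangle of the padded vector matrix relative to its diagonal so that the decoders cannot rely on future information; the sketch implements that stated goal with the conventional causal (triangular) mask over attention scores, which is an interpretive assumption, as are all names and shapes.

import torch

def causal_mask(seq_len):
    # True marks positions a token must not attend to (the future positions).
    return torch.triu(torch.ones(seq_len, seq_len), diagonal=1).bool()

def attention(q_in, kv_in, w_q, w_k, w_v, mask=None):
    q, k, v = q_in @ w_q, kv_in @ w_k, kv_in @ w_v
    scores = q @ k.T / k.shape[-1] ** 0.5
    if mask is not None:
        scores = scores.masked_fill(mask, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v

def decoder_layer(target, text_code, self_weights, cross_weights, w_dec, b_dec):
    # target: built from the perception vector and the mask vector
    # (e.g. their element-wise average, per the patent).
    h = attention(target, target, *self_weights,
                  mask=causal_mask(target.shape[0]))  # masked self-attention
    h = attention(h, text_code, *cross_weights)       # attention analysis on the text code
    # Full connection with the decoder's weight matrix and bias -> initial decoding.
    return h @ w_dec + b_dec

Chaining such layers, each feeding its initial decoding to the next decoder as the target vector, produces the decoding information.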
And 106, predicting the decoding information based on the prediction output layer to obtain a second output probability of each template vocabulary in the plurality of decoders.
In at least one embodiment of the present invention, the plurality of template vocabularies refer to vocabularies configured in advance in the answer recognition model.
The second output probability refers to a probability generated for each template vocabulary based on the prediction output layer and the decoding information.
In at least one embodiment of the present invention, the electronic device predicting the decoding information based on the prediction output layer, and obtaining the second output probability of each template vocabulary in the plurality of decoders includes:
activating the decoding information based on an activating function of the prediction output layer to obtain activating information;
and normalizing the activation information to obtain the second output probability.
Through this implementation, the decoding information can be accurately mapped onto the plurality of template vocabularies, improving the accuracy of the second output probability; a sketch follows.
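A sketch of the prediction output layer: an activation over the template vocabulary followed by normalization. The tanh activation and the output projection (w_out, b_out) are assumptions; the patent states only that an activation function is applied and the result normalized.

import torch

def second_output_probability(decoding_info, w_out, b_out):
    # Map the decoding information onto the template vocabulary, activate,
    # then normalize to obtain the second output probability.
    activation = torch.tanh(decoding_info @ w_out + b_out)
    return torch.softmax(activation, dim=-1)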
107, generating a question answer of the question to be tested according to the first output probability and the second output probability.
It should be emphasized that, to further ensure the privacy and security of the answers to the questions, the answers to the questions may also be stored in nodes of a blockchain.
In at least one embodiment of the present invention, the answer to the question refers to an answer text corresponding to the question to be tested. When the question to be tested is a question input by a user in an automatic question-answering system, the answer to the question may be an answer output by the automatic question-answering system.
In at least one embodiment of the present invention, the generating, by the electronic device, the answer to the question to be tested according to the first output probability and the second output probability includes:
performing weighted sum operation on the first output probability and the second output probability to obtain target probabilities of the text vocabulary and the template vocabulary;
determining the vocabulary with the target probability larger than a preset probability threshold as a target vocabulary;
and generating the answer to the question according to the target vocabulary.
The preset probability threshold can be set according to actual requirements.
Combining the first output probability and the second output probability generates the target probability accurately, thereby improving the accuracy of the answer to the question; a sketch of this combination follows.
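A sketch of the final combination. The weight lam, the threshold, and the assumption that both probabilities are aligned over one shared vocabulary (text words plus template words) are illustrative choices, not values from the patent.

import torch

def generate_answer(first_prob, second_prob, vocabulary, lam=0.5, threshold=0.3):
    # Weighted sum of the two output probabilities -> target probability.
    target_prob = lam * first_prob + (1.0 - lam) * second_prob
    # Words whose target probability exceeds the preset threshold become
    # target words, which are spliced into the answer.
    indices = torch.where(target_prob > threshold)[0].tolist()
    return " ".join(vocabulary[i] for i in indices)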
According to the above technical solution, the input text is generated by combining the question to be tested, the question type, the evidence text and the historical answer. Because questions of different types call for different answer formats, adding the question type to the input text improves the accuracy and rationality of answer generation; adding the historical answer makes the subsequently generated answer more complete and guides the answer recognition model toward an answer that conforms to the question type. By arranging a plurality of encoders in the answer recognition model, the text code can attend to semantic information of different granularities from shallow layers to deep layers. Finally, generating the answer from both the first output probability and the second output probability avoids erroneous or redundant answer information.
FIG. 3 is a functional block diagram of a preferred embodiment of the answer generating device of the application. The answer generating device 11 includes a generating unit 110, an acquiring unit 111, an encoding unit 112, a sensing unit 113, a decoding unit 114, a prediction unit 115, a construction unit 116, a calculation unit 117, and an adjustment unit 118. The modules/units referred to herein are series of computer readable instructions, stored in the memory 12, that can be retrieved by the processor 13 to perform fixed functions. In the present embodiment, the functions of the respective modules/units are described in detail in the following embodiments.
A generating unit 110, configured to generate an input text according to the acquired question to be tested, the question type of the question to be tested, the evidence text corresponding to the question to be tested, and the historical answer corresponding to the historical question associated with the question to be tested;
an obtaining unit 111, configured to obtain a pre-trained answer identification model, where the answer identification model includes a plurality of encoders, a perception network layer, a plurality of decoders, and a prediction output layer;
an encoding unit 112, configured to perform encoding processing on the input text based on the plurality of encoders, so as to obtain text encoding;
the sensing unit 113 is configured to perform perception processing on the text code based on the perception network layer to obtain a perception vector, and perform normalization processing on the perception vector to obtain a first output probability of each text vocabulary in the input text;
a decoding unit 114, configured to decode the perceptual vector and the input text based on the plurality of decoders and the text encoding, to obtain decoded information;
a prediction unit 115, configured to predict the decoding information based on the prediction output layer, to obtain a second output probability of each template vocabulary in the plurality of decoders;
The generating unit 110 is further configured to generate a question answer of the question to be tested according to the first output probability and the second output probability.
In at least one embodiment of the present invention, the generating unit 110 is further configured to extract the query words in the question to be tested;
identify the question type by matching the query words against the preset vocabulary;
identify the generation time and the session of the question to be tested;
acquire request questions according to the generation time and the session, and acquire the request time of each request question;
screen the historical questions from the request questions according to the request time;
obtain the answer texts corresponding to the historical questions as the historical answers;
and splice the question type, the question to be tested, the preset identifier, the evidence text and the historical answers to form the input text.
In at least one embodiment of the present invention, the encoding unit 112 is further configured to characterize each text vocabulary in the input text to obtain a text vector;
for any encoder, generate an attention vector for the text vector based on each group of weight matrices;
splice the plurality of attention vectors to obtain a spliced vector;
calculate the product of the configuration matrix and the spliced vector to obtain attention information;
perform a full-connection operation on the attention information according to the preset matrices and preset bias values to obtain an initial vector, wherein the generation formula of the initial vector is: y = max(a, xW₁ + b₁)W₂ + b₂, where y represents the initial vector, a represents a preset constant, x represents the attention information, W₁ and W₂ represent the preset matrices, and b₁ and b₂ represent the preset bias values;
and take the initial vector as the text vector of the next encoder for encoding processing, until all of the plurality of encoders have participated in encoding, to obtain the text code.
In at least one embodiment of the present invention, the sensing unit 113 is further configured to, for each hidden layer of the sensing network layer, perform a full-connection operation on the text code based on network parameters of neurons in the hidden layer, to obtain an output vector of the hidden layer;
and taking the output vector as a text code of the next hidden layer to perform perception processing until each hidden layer of the perception network layer participates in processing to obtain the perception vector.
In at least one embodiment of the present invention, the decoding unit 114 is further configured to perform a masking process on the input text to obtain a mask vector;
generating a target vector according to the perception vector and the mask vector;
for any decoder, perform attention analysis on the target vector based on that decoder and the text code to obtain attention information;
and perform full-connection processing on the attention information based on the decoding weight matrix and the decoding bias of that decoder to obtain an initial decoding, and take the initial decoding as the target vector of the next decoder for decoding processing, until all of the plurality of decoders have participated in decoding, to obtain the decoding information.
In at least one embodiment of the present invention, the prediction unit 115 is further configured to perform activation processing on the decoded information based on an activation function of the prediction output layer, to obtain activation information;
and normalizing the activation information to obtain the second output probability.
In at least one embodiment of the present invention, the generating unit 110 is further configured to perform a weighted sum operation on the first output probability and the second output probability to obtain target probabilities of the text vocabulary and the template vocabulary;
Determining the vocabulary with the target probability larger than a preset probability threshold as a target vocabulary;
and generating the answer to the question according to the target vocabulary.
In at least one embodiment of the present application, before acquiring the answer recognition model that is pre-trained, a construction unit 116 is configured to construct an answer recognition network;
the acquiring unit 111 is further configured to acquire a training sample;
a calculating unit 117 for calculating a loss value of the training sample on the answer recognition network based on a preset cross entropy loss function;
and the adjusting unit 118 is configured to adjust the preset parameters of the answer recognition network based on the loss value until the loss value no longer decreases, thereby obtaining the answer recognition model.
According to the above technical solution, the input text is generated by combining the question to be tested, the question type, the evidence text and the historical answer. Because questions of different types call for different answer formats, adding the question type to the input text improves the accuracy and rationality of answer generation; adding the historical answer makes the subsequently generated answer more complete and guides the answer recognition model toward an answer that conforms to the question type. By arranging a plurality of encoders in the answer recognition model, the text code can attend to semantic information of different granularities from shallow layers to deep layers. Finally, generating the answer from both the first output probability and the second output probability avoids erroneous or redundant answer information.
Fig. 4 is a schematic structural diagram of an electronic device according to a preferred embodiment of the present invention for implementing the answer generation method.
In one embodiment of the invention, the electronic device 1 includes, but is not limited to, a memory 12, a processor 13, and computer readable instructions, such as an answer generation program, stored in the memory 12 and executable on the processor 13.
It will be appreciated by those skilled in the art that the schematic diagram is merely an example of the electronic device 1 and does not constitute a limitation of the electronic device 1, and may include more or less components than illustrated, or may combine certain components, or different components, e.g. the electronic device 1 may further include input-output devices, network access devices, buses, etc.
The processor 13 may be a central processing unit (Central Processing Unit, CPU), but may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. The general purpose processor may be a microprocessor or the processor may be any conventional processor, etc., and the processor 13 is an operation core and a control center of the electronic device 1, connects various parts of the entire electronic device 1 using various interfaces and lines, and executes an operating system of the electronic device 1 and various installed applications, program codes, etc.
Illustratively, the computer readable instructions may be partitioned into one or more modules/units that are stored in the memory 12 and executed by the processor 13 to complete the present invention. The one or more modules/units may be a series of computer readable instructions capable of performing a specific function, the computer readable instructions describing a process of executing the computer readable instructions in the electronic device 1. For example, the computer-readable instructions may be divided into a generating unit 110, an acquiring unit 111, an encoding unit 112, a sensing unit 113, a decoding unit 114, a prediction unit 115, a constructing unit 116, a calculating unit 117, and an adjusting unit 118.
The memory 12 may be used to store the computer readable instructions and/or modules, and the processor 13 may implement various functions of the electronic device 1 by executing the computer readable instructions and/or modules stored in the memory 12 and invoking data stored in the memory 12. The memory 12 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the electronic device, etc. The memory 12 may include non-volatile and volatile memory, such as a hard disk, memory, plug-in hard disk, Smart Media Card (SMC), Secure Digital (SD) card, Flash Card, at least one disk storage device, flash memory device, or other storage device.
The memory 12 may be an external memory and/or an internal memory of the electronic device 1. Further, the memory 12 may be a physical memory, such as a memory bank, a TF Card (Trans-flash Card), or the like.
The integrated modules/units of the electronic device 1 may be stored in a computer readable storage medium if implemented in the form of software functional units and sold or used as separate products. Based on such understanding, the present invention may also be implemented by implementing all or part of the processes in the methods of the embodiments described above, by instructing the associated hardware by means of computer readable instructions, which may be stored in a computer readable storage medium, the computer readable instructions, when executed by a processor, implementing the steps of the respective method embodiments described above.
The computer readable instructions comprise computer readable instruction code, which may be in the form of source code, object code, an executable file, or some intermediate form, etc. The computer readable medium may include: any entity or device capable of carrying the computer readable instruction code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, computer memory, read-only memory (ROM), or random access memory (RAM).
The blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms and encryption algorithms. A blockchain is essentially a decentralised database: a chain of data blocks generated in association by cryptographic means, each data block containing a batch of network transaction information used to verify the validity of the information (anti-counterfeiting) and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product services layer, an application services layer, and the like.
In connection with fig. 1, the memory 12 in the electronic device 1 stores computer readable instructions implementing an answer generation method, the processor 13 being executable to implement:
generating an input text according to the acquired question to be tested, the question type of the question to be tested, the evidence text corresponding to the question to be tested, and the historical answer corresponding to the historical question associated with the question to be tested;
acquiring a pre-trained answer recognition model, wherein the answer recognition model comprises a plurality of encoders, a perception network layer, a plurality of decoders and a prediction output layer;
encoding the input text based on the plurality of encoders to obtain a text code;
performing perception processing on the text code based on the perception network layer to obtain a perception vector, and performing normalization processing on the perception vector to obtain a first output probability of each text vocabulary in the input text;
decoding the perception vector and the input text based on the plurality of decoders and the text code to obtain decoding information;
predicting the decoding information based on the prediction output layer to obtain a second output probability of each template vocabulary of the plurality of decoders;
and generating an answer to the question to be tested according to the first output probability and the second output probability.
In particular, the specific implementation method of the processor 13 on the computer readable instructions may refer to the description of the relevant steps in the corresponding embodiment of fig. 1, which is not repeated herein.
In the several embodiments provided in the present invention, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be other manners of division when actually implemented.
The computer readable storage medium has stored thereon computer readable instructions, wherein the computer readable instructions when executed by the processor 13 are configured to implement the steps of:
generating an input text according to the acquired question to be tested, the question type of the question to be tested, the evidence text corresponding to the question to be tested, and the historical answer corresponding to the historical question associated with the question to be tested;
acquiring a pre-trained answer recognition model, wherein the answer recognition model comprises a plurality of encoders, a perception network layer, a plurality of decoders and a prediction output layer;
encoding the input text based on the plurality of encoders to obtain a text code;
performing perception processing on the text code based on the perception network layer to obtain a perception vector, and performing normalization processing on the perception vector to obtain a first output probability of each text vocabulary in the input text;
decoding the perception vector and the input text based on the plurality of decoders and the text code to obtain decoding information;
predicting the decoding information based on the prediction output layer to obtain a second output probability of each template vocabulary of the plurality of decoders;
and generating an answer to the question to be tested according to the first output probability and the second output probability.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical units, may be located in one place, or may be distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in the embodiments of the present invention may be integrated in one processing unit, each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units can be realized in the form of hardware, or in the form of hardware plus software functional modules.
The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and that the singular does not exclude a plurality. The units or means may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote a name, but not any particular order.
Finally, it should be noted that the above-mentioned embodiments are merely for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made to the technical solution of the present invention without departing from the spirit and scope of the technical solution of the present invention.

Claims (10)

1. An answer generation method, characterized in that the answer generation method comprises:
generating an input text according to an acquired question to be answered, a question type of the question to be answered, an evidence text corresponding to the question to be answered, and a historical answer corresponding to a historical question associated with the question to be answered;
acquiring a pre-trained answer recognition model, wherein the answer recognition model comprises a plurality of encoders, a perception network layer, a plurality of decoders and a prediction output layer;
encoding the input text based on the plurality of encoders to obtain a text encoding;
performing perception processing on the text encoding based on the perception network layer to obtain a perception vector, and performing normalization processing on the perception vector to obtain a first output probability of each text word in the input text;
decoding the perception vector and the input text based on the plurality of decoders and the text encoding to obtain decoding information;
predicting the decoding information based on the prediction output layer to obtain a second output probability of each template word among a plurality of template words; and
generating a question answer to the question to be answered according to the first output probability and the second output probability.
2. The answer generation method of claim 1, wherein the generating an input text according to an acquired question to be answered, a question type of the question to be answered, an evidence text corresponding to the question to be answered, and a historical answer corresponding to a historical question associated with the question to be answered comprises:
extracting a query word in the question to be answered;
identifying the question type according to a match between the query word and a preset vocabulary;
identifying a generation time and a question scenario of the question to be answered;
acquiring request questions according to the generation time and the question scenario, and acquiring a request time of each request question;
screening the historical question from the request questions according to the request time;
obtaining an answer text corresponding to the historical question as the historical answer; and
splicing the question type, the question to be answered, a preset identifier, the evidence text and the historical answer to obtain the input text.
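One possible reading of this claim in code is sketched below; the toy preset vocabulary, the one-hour screening window, the [SEP] identifier, the request-log record layout and all names are hypothetical assumptions:

from datetime import datetime, timedelta

# Hypothetical preset vocabulary mapping query words to question types.
PRESET_VOCAB = {"when": "time", "where": "place", "who": "person", "why": "reason"}

def build_input_text(question, evidence_text, request_log, scenario, sep="[SEP]"):
    # Extract the query word and match it against the preset vocabulary.
    query_word = next((w for w in question.lower().split() if w in PRESET_VOCAB), None)
    q_type = PRESET_VOCAB.get(query_word, "other")

    # Screen historical questions: same scenario, requested within an
    # hour before the generation time (the window is an assumption).
    now = datetime.now()  # generation time of the question
    history = [r for r in request_log
               if r["scenario"] == scenario
               and now - timedelta(hours=1) <= r["time"] <= now]
    past_answers = " ".join(r["answer"] for r in history)

    # Splice question type, question, preset identifier, evidence and history.
    return f"{q_type} {question} {sep} {evidence_text} {sep} {past_answers}"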
3. The answer generation method of claim 1, wherein the encoding the input text based on the plurality of encoders comprises:
characterizing each text word in the input text to obtain a text vector;
for any encoder, generating an attention vector for the text vector based on each set of weight matrices;
splicing the plurality of attention vectors to obtain a spliced vector;
calculating a product of a configuration matrix and the spliced vector to obtain attention information;
performing a full-connection operation on the attention information according to preset matrices and preset bias values to obtain an initial vector, wherein the initial vector is generated as: y = max(a, xW1 + b1)W2 + b2, where y denotes the initial vector, a denotes a preset constant, x denotes the attention information, W1 and W2 denote the preset matrices, and b1 and b2 denote the preset bias values; and
taking the initial vector as the text vector of the next encoder for encoding processing until all of the plurality of encoders have participated in encoding, to obtain the text encoding.
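The encoding procedure, including the stated full-connection formula, could be realized as in the following numpy sketch; the head count, the toy dimensions, and a = 0 (which makes max(a, .) a ReLU) are assumptions:

import numpy as np

rng = np.random.default_rng(0)
seq, d, heads, dk = 6, 16, 4, 4   # toy sizes (assumptions)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def encoder_layer(x, Wq, Wk, Wv, Wo, W1, b1, W2, b2, a=0.0):
    # One attention vector per set of weight matrices (one per head).
    vecs = []
    for h in range(heads):
        q, k, v = x @ Wq[h], x @ Wk[h], x @ Wv[h]
        vecs.append(softmax(q @ k.T / np.sqrt(dk)) @ v)
    spliced = np.concatenate(vecs, axis=-1)   # spliced vector
    attn = spliced @ Wo                       # product with the configuration matrix
    # Full-connection operation: y = max(a, xW1 + b1)W2 + b2
    return np.maximum(a, attn @ W1 + b1) @ W2 + b2

x = rng.normal(size=(seq, d))                 # text vectors of the input text
Wq, Wk, Wv = (rng.normal(size=(heads, d, dk)) for _ in range(3))
Wo = rng.normal(size=(heads * dk, d))
W1, b1 = rng.normal(size=(d, 4 * d)), np.zeros(4 * d)
W2, b2 = rng.normal(size=(4 * d, d)), np.zeros(d)
y = encoder_layer(x, Wq, Wk, Wv, Wo, W1, b1, W2, b2)  # initial vector -> next encoder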
4. The answer generation method of claim 1, wherein the performing perception processing on the text encoding based on the perception network layer to obtain a perception vector comprises:
for each hidden layer of the perception network layer, performing a full-connection operation on the text encoding based on network parameters of the neurons in the hidden layer to obtain an output vector of the hidden layer; and
taking the output vector as the text encoding of the next hidden layer for perception processing until each hidden layer of the perception network layer has participated in processing, to obtain the perception vector.
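A minimal sketch of the perception network layer, assuming a tanh activation (the claim does not name one) and one weight matrix and bias per hidden layer:

import numpy as np

def perception_layer(text_encoding, hidden_weights, hidden_biases):
    # Pass the text encoding through each hidden layer in turn; the
    # output vector of one hidden layer is the input of the next.
    out = text_encoding
    for W, b in zip(hidden_weights, hidden_biases):
        out = np.tanh(out @ W + b)   # full-connection operation per hidden layer
    return out                       # perception vector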
5. The answer generation method of claim 1, wherein the decoding the perception vector and the input text based on the plurality of decoders and the text encoding to obtain decoding information comprises:
performing masking processing on the input text to obtain a mask vector;
generating a target vector according to the perception vector and the mask vector;
for any decoder, performing attention analysis on the target vector based on said any decoder and the text encoding to obtain attention information; and
performing full-connection processing on the attention information based on a decoding weight matrix and a decoding bias of said any decoder to obtain an initial decoding, and taking the initial decoding as the target vector of the next decoder for decoding processing until all of the plurality of decoders have participated in decoding, to obtain the decoding information.
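An illustrative sketch of the decoding procedure, with several assumptions: random zero-masking of input positions, an additive combination of the perception vector and the mask vector (the perception vector is assumed to share the embedding shape), and cross-attention over the text encoding:

import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def decode(x_embed, perception_vec, text_encoding, decoder_params, mask_rate=0.15):
    # Masking processing: zero out a fraction of input positions (assumed scheme).
    rng = np.random.default_rng(1)
    keep = (rng.random(x_embed.shape[0]) > mask_rate)[:, None]
    mask_vector = x_embed * keep
    # Target vector from the perception vector and the mask vector.
    target = mask_vector + perception_vec
    for Wq, Wk, Wv, Wd, bd in decoder_params:
        # Attention analysis of the target vector over the text encoding.
        q, k, v = target @ Wq, text_encoding @ Wk, text_encoding @ Wv
        attn = softmax(q @ k.T / np.sqrt(q.shape[-1])) @ v
        # Full connection with this decoder's weight matrix and bias gives the
        # initial decoding, which becomes the target vector of the next decoder.
        target = attn @ Wd + bd
    return target   # decoding information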
6. The answer generation method of claim 1, wherein the predicting the decoding information based on the prediction output layer to obtain a second output probability of each template word among a plurality of template words comprises:
activating the decoding information based on an activation function of the prediction output layer to obtain activation information; and
normalizing the activation information to obtain the second output probability.
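A minimal sketch of the prediction output layer, assuming tanh as the activation function and softmax as the normalization; the projection W_out/b_out onto the template words is also an assumption:

import numpy as np

def predict(decoding_info, W_out, b_out):
    # Activation of the decoding information -> activation information.
    activated = np.tanh(decoding_info @ W_out + b_out)
    # Normalization -> second output probability per template word.
    e = np.exp(activated - activated.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)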
7. The answer generation method of claim 1, wherein the generating a question answer to the question to be answered according to the first output probability and the second output probability comprises:
performing a weighted sum operation on the first output probability and the second output probability to obtain target probabilities of the text words and the template words;
determining the words whose target probability is greater than a preset probability threshold as target words; and
generating the question answer according to the target words.
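The probability mixing of this claim can be sketched as follows; the equal weights and the 0.1 threshold are arbitrary assumptions, and a word present in both the input text and the template vocabulary simply accumulates both weighted terms:

def pick_answer_words(p_first, text_words, p_second, template_words,
                      w1=0.5, w2=0.5, threshold=0.1):
    # Weighted sum of the two distributions -> target probability per word.
    target = {}
    for w, p in zip(text_words, p_first):
        target[w] = target.get(w, 0.0) + w1 * p
    for w, p in zip(template_words, p_second):
        target[w] = target.get(w, 0.0) + w2 * p
    # Words whose target probability exceeds the preset threshold become
    # the target words from which the question answer is generated.
    return [w for w, p in target.items() if p > threshold]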
8. An answer generation device, characterized in that the answer generation device comprises:
a generating unit, configured to generate an input text according to an acquired question to be answered, a question type of the question to be answered, an evidence text corresponding to the question to be answered, and a historical answer corresponding to a historical question associated with the question to be answered;
an acquisition unit, configured to acquire a pre-trained answer recognition model, wherein the answer recognition model comprises a plurality of encoders, a perception network layer, a plurality of decoders and a prediction output layer;
an encoding unit, configured to encode the input text based on the plurality of encoders to obtain a text encoding;
a perception unit, configured to perform perception processing on the text encoding based on the perception network layer to obtain a perception vector, and to perform normalization processing on the perception vector to obtain a first output probability of each text word in the input text;
a decoding unit, configured to decode the perception vector and the input text based on the plurality of decoders and the text encoding to obtain decoding information;
a prediction unit, configured to predict the decoding information based on the prediction output layer to obtain a second output probability of each template word among a plurality of template words;
the generating unit being further configured to generate a question answer to the question to be answered according to the first output probability and the second output probability.
9. An electronic device, the electronic device comprising:
a memory storing computer readable instructions; and
A processor executing computer readable instructions stored in the memory to implement the answer generation method of any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored therein computer-readable instructions which, when executed by a processor in an electronic device, implement the answer generation method of any one of claims 1 to 7.
CN202310593894.9A 2023-05-24 2023-05-24 Answer generation method, device, equipment and storage medium Pending CN116628161A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310593894.9A CN116628161A (en) 2023-05-24 2023-05-24 Answer generation method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310593894.9A CN116628161A (en) 2023-05-24 2023-05-24 Answer generation method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116628161A true CN116628161A (en) 2023-08-22

Family

ID=87620822

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310593894.9A Pending CN116628161A (en) 2023-05-24 2023-05-24 Answer generation method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116628161A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117743559A (en) * 2024-02-20 2024-03-22 厦门国际银行股份有限公司 Multi-round dialogue processing method, device and equipment based on RAG

Similar Documents

Publication Publication Date Title
CN112949786B (en) Data classification identification method, device, equipment and readable storage medium
CN111694826B (en) Data enhancement method and device based on artificial intelligence, electronic equipment and medium
CN112989826B (en) Test question score determining method, device, equipment and medium based on artificial intelligence
CN113656547B (en) Text matching method, device, equipment and storage medium
CN114090794A (en) Event map construction method based on artificial intelligence and related equipment
CN113722474A (en) Text classification method, device, equipment and storage medium
CN116113356A (en) Method and device for determining user dementia degree
CN116628161A (en) Answer generation method, device, equipment and storage medium
CN116130072A (en) Department recommendation method, device, equipment and storage medium
CN113724830B (en) Medication risk detection method based on artificial intelligence and related equipment
CN114281931A (en) Text matching method, device, equipment, medium and computer program product
CN112435745B (en) Method and device for recommending treatment strategy, electronic equipment and storage medium
CN113536770A (en) Text analysis method, device and equipment based on artificial intelligence and storage medium
CN116468043A (en) Nested entity identification method, device, equipment and storage medium
CN114360732B (en) Medical data analysis method, device, electronic equipment and storage medium
CN113470775B (en) Information acquisition method, device, equipment and storage medium
CN113326365B (en) Reply sentence generation method, device, equipment and storage medium
CN113535925B (en) Voice broadcasting method, device, equipment and storage medium
CN113269179B (en) Data processing method, device, equipment and storage medium
CN111680515B (en) Answer determination method and device based on AI (Artificial Intelligence) recognition, electronic equipment and medium
CN114942749A (en) Development method, device and equipment of approval system and storage medium
CN114155957A (en) Text determination method and device, storage medium and electronic equipment
CN113920564A (en) Client mining method based on artificial intelligence and related equipment
CN113705092A (en) Disease prediction method and device based on machine learning
CN113408265A (en) Semantic analysis method, device and equipment based on human-computer interaction and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination