CN109597993B - Statement analysis processing method, device, equipment and computer readable storage medium - Google Patents

Statement analysis processing method, device, equipment and computer readable storage medium

Info

Publication number
CN109597993B
CN109597993B (Application No. CN201811464437.5A)
Authority
CN
China
Prior art keywords
word slot
intention
similarity score
training model
word
Prior art date
Legal status: Active
Application number
CN201811464437.5A
Other languages
Chinese (zh)
Other versions
CN109597993A (en)
Inventor
汤耀华
莫凯翔
张超
徐倩
杨强
Current Assignee: WeBank Co Ltd
Original Assignee
WeBank Co Ltd
Priority date
Filing date
Publication date
Application filed by WeBank Co Ltd filed Critical WeBank Co Ltd
Priority to CN201811464437.5A priority Critical patent/CN109597993B/en
Priority to PCT/CN2019/081282 priority patent/WO2020107765A1/en
Publication of CN109597993A publication Critical patent/CN109597993A/en
Application granted granted Critical
Publication of CN109597993B publication Critical patent/CN109597993B/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00: Handling natural language data
    • G06F 40/20: Natural language analysis
    • G06F 40/279: Recognition of textual entities
    • G06F 40/289: Phrasal analysis, e.g. finite state techniques or chunking
    • G06F 40/30: Semantic analysis
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods


Abstract

The invention discloses a statement analysis processing method, a statement analysis processing device, equipment and a storage medium, wherein the method comprises the following steps: acquiring a pre-training model on a large sample data set in a source field, and migrating the pre-training model to a target field through transfer learning; in the target field, acquiring each sentence characteristic of a preset question in the pre-training model, and performing semantic analysis on each sentence characteristic to determine the different intentions corresponding to the preset question; acquiring the intention similarity score of each intention in the pre-training model, and determining the highest intention similarity score among the intention similarity scores; acquiring each word slot in the pre-training model, determining the word slot similarity score of each word slot, and determining the highest word slot similarity score among the word slot similarity scores; and acquiring and outputting the final intention corresponding to the highest intention similarity score and the final word slot corresponding to the highest word slot similarity score. This achieves the technical effect that the model can quickly learn and execute spoken language understanding tasks when migrated to a new field.

Description

Statement analysis processing method, device, equipment and computer readable storage medium
Technical Field
The present invention relates to the field of transfer learning technologies, and in particular, to a method, an apparatus, a device, and a computer-readable storage medium for statement analysis processing.
Background
The spoken language understanding model in an artificial intelligence dialog robot plays a critical role in helping the robot understand the user's intent. Artificial intelligence dialog robots, such as Amazon's Alexa, Microsoft's XiaoIce, and Apple's Siri, are now in widespread use. The spoken language understanding capability of such a robot is particularly important: the robot must not only understand common user demand scenarios, but also continuously extend its understanding to new ones. Supporting a new user demand scenario generally requires data collection and annotation, and the currently adopted technical solutions generally rely on rule matching or on adding training data. This process is time-consuming and expensive and requires a specialized labeling team. Therefore, after a spoken language understanding model has been learned in a scenario with a large amount of data, it cannot be rapidly adapted to a new scenario field that offers only a small number of samples or zero samples; this is the technical problem to be solved.
Disclosure of Invention
The invention mainly aims to provide a statement analysis processing method, a statement analysis processing device, statement analysis processing equipment and a computer readable storage medium, and aims to solve the technical problem that after a model is migrated to a new field, a spoken language understanding task cannot be rapidly learned and executed because only a small number of samples or zero samples exist.
In order to achieve the above object, the present invention provides a sentence analyzing and processing method, including the following steps:
acquiring a pre-training model on a large sample data set in a source field, and migrating the pre-training model to a target field through transfer learning;
in the target field, obtaining each sentence characteristic of a preset question in the pre-training model, and performing semantic analysis on each sentence characteristic to determine each different intention corresponding to the preset question;
acquiring intention similarity scores of the intentions in a pre-training model, and determining the highest intention similarity score in the intention similarity scores;
obtaining each word slot in the pre-training model, determining word slot similarity scores of each word slot in the pre-training model, and determining the highest word slot similarity score in each word slot similarity score, wherein a preset question sentence is analyzed according to the pre-training model to obtain each word slot in the pre-training model;
and acquiring a final intention corresponding to the highest intention similarity score and a final word slot corresponding to the highest word slot similarity score, and outputting the final intention and the final word slot.
Optionally, the step of obtaining an intention similarity score of each intention in a pre-training model and determining a highest intention similarity score among the intention similarity scores includes:
acquiring a first state vector in the pre-training model;
acquiring intention name semantic vectors corresponding to the intentions, and calculating intention similarity scores between the intention name semantic vectors and the first state vectors;
comparing each of the intention similarity scores to obtain a highest intention similarity score among the intention similarity scores.
Optionally, the step of obtaining an intention name semantic vector corresponding to each intention includes:
obtaining statement information in the intention, and determining statement semantic vectors corresponding to the statement information;
and acquiring an average vector value of each statement semantic vector, and taking the average vector value as the intention name semantic vector.
Optionally, the step of obtaining each word slot in the pre-training model and determining a word slot similarity score of each word slot in the pre-training model includes:
acquiring word slots in the pre-training model;
acquiring a word slot name and an integral word slot value of the word slot, and determining a first similarity score of the word slot name and a second similarity score of the integral word slot value;
and determining a word slot similarity score for the word slot based on a sum of the first similarity score and the second similarity score.
Optionally, the step of determining a first similarity score of the word slot name and a second similarity score of the whole word slot value includes:
acquiring a current position state in the pre-training model, and determining a second state vector of the current position state;
acquiring a word slot name semantic vector corresponding to the word slot name, and determining a first similarity score between the word slot name semantic vector and the second state vector;
and acquiring a value semantic vector corresponding to the value of the whole word slot, and determining a second similarity score between the value semantic vector and the second state vector.
Optionally, the step of obtaining a value semantic vector corresponding to the value of the whole word slot includes:
obtaining each sub-word slot value in the word slot, and determining a sub-value semantic vector corresponding to each sub-word slot value;
calculating a third similarity score between the sub-value semantic vector and the second state vector, and obtaining a vector product between the third similarity score and the sub-value semantic vector;
and acquiring vector products corresponding to the values of the sub word slots, and adding the vector products to acquire a value semantic vector corresponding to the value of the whole word slot.
Optionally, the step of obtaining word slots in the pre-training model includes:
acquiring a preset question in the pre-training model;
and performing semantic analysis on the preset question in the target field to determine each word slot in the pre-training model.
In order to achieve the above object, the present invention also provides a sentence analyzing and processing apparatus including:
the migration module is used for acquiring a pre-training model on a large sample data set in a source field and migrating the pre-training model to a target field through transfer learning;
the determining module is used for acquiring each sentence characteristic of a preset question in the pre-training model in the target field and performing semantic analysis on each sentence characteristic to determine each different intention corresponding to the preset question;
the first acquisition module is used for acquiring intention similarity scores of all intentions in a pre-training model and determining the highest intention similarity score in all the intention similarity scores;
a second obtaining module, configured to obtain each word slot in the pre-training model, determine a word slot similarity score of each word slot in the pre-training model, and determine a highest word slot similarity score in each word slot similarity score, where a preset question sentence is analyzed according to the pre-training model to obtain each word slot in the pre-training model;
and the output module is used for acquiring the final intention corresponding to the highest intention similarity score and the final word slot corresponding to the highest word slot similarity score and outputting the final intention and the final word slot.
In addition, in order to achieve the above object, the present invention also provides a mobile terminal;
the mobile terminal includes: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein:
the computer program, when executed by the processor, implements the steps of the statement analysis processing method as described above.
In addition, to achieve the above object, the present invention also provides a storage medium;
the storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the sentence analysis processing method as described above.
In the embodiment, a pre-training model on a large sample data set in a source field is acquired, and the pre-training model is migrated to a target field through transfer learning; in the target field, each sentence characteristic of a preset question in the pre-training model is acquired, and semantic analysis is performed on each sentence characteristic to determine the different intentions corresponding to the preset question; the intention similarity score of each intention in the pre-training model is acquired, and the highest intention similarity score among the intention similarity scores is determined; each word slot in the pre-training model is acquired, the word slot similarity score of each word slot is determined, and the highest word slot similarity score among the word slot similarity scores is determined; finally, the final intention corresponding to the highest intention similarity score and the final word slot corresponding to the highest word slot similarity score are acquired and output. Replacing the simple classification model of the original model with the calculation of intention similarity scores and word slot similarity scores handles the migration from the source field to the target field well: after the model migrates, the user does not need to redesign and re-plan, the model is extensible, and training data does not need to be added again, saving labor cost. This solves the technical problem that, after a model migrates to a new field, a spoken language understanding task cannot be learned and executed quickly because there are only a small number of samples or zero samples.
Drawings
FIG. 1 is a schematic diagram of a terminal/device structure of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a sentence analyzing and processing method according to a first embodiment of the present invention;
FIG. 3 is a flowchart illustrating a sentence analyzing and processing method according to a second embodiment of the present invention;
FIG. 4 is a functional block diagram of a sentence analyzing and processing apparatus according to the present invention;
FIG. 5 is a diagram of a model network structure of the sentence analyzing and processing method of the present invention.
The objects, features and advantages of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in fig. 1, fig. 1 is a schematic terminal structure diagram of a hardware operating environment according to an embodiment of the present invention.
The terminal of the embodiment of the invention is statement analysis and processing equipment.
As shown in fig. 1, the terminal may include: a processor 1001, such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, a communication bus 1002. Wherein a communication bus 1002 is used to enable connective communication between these components. The user interface 1003 may include a Display screen (Display), an input unit such as a Keyboard (Keyboard), and the optional user interface 1003 may also include a standard wired interface, a wireless interface. The network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a storage device separate from the processor 1001.
Optionally, the terminal may further include a camera, a Radio Frequency (RF) circuit, a sensor, an audio circuit, a WiFi module, and the like. The sensors include, for example, light sensors, motion sensors, and other sensors. Specifically, the light sensor may include an ambient light sensor that adjusts the brightness of the display screen according to the brightness of ambient light, and a proximity sensor that turns off the display screen and/or the backlight when the terminal device is moved to the ear. Of course, the terminal device may also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which are not described herein again.
Those skilled in the art will appreciate that the terminal structure shown in fig. 1 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, a memory 1005, which is a kind of computer storage medium, may include therein an operating system, a network communication module, a user interface module, and a sentence analysis processing program.
In the terminal shown in fig. 1, the network interface 1004 is mainly used for connecting to a backend server and performing data communication with the backend server; the user interface 1003 is mainly used for connecting a client (user side) and performing data communication with the client; and the processor 1001 may be configured to call the statement analysis handler stored in the memory 1005, and perform the following operations:
acquiring a pre-training model on a large sample data set in a source field, and migrating the pre-training model to a target field through transfer learning;
obtaining each sentence characteristic in a preset question sentence in the pre-training model, and performing semantic analysis on each sentence characteristic to determine each different intention corresponding to the preset question sentence;
acquiring intention similarity scores of the intentions in a pre-training model, and determining the highest intention similarity score in the intention similarity scores;
obtaining each word slot in the pre-training model, determining word slot similarity scores of each word slot in the pre-training model, and determining the highest word slot similarity score in each word slot similarity score;
and acquiring the final intention corresponding to the highest intention similarity score and the final word slot corresponding to the highest word slot similarity score, and outputting the final intention and the final word slot.
The invention provides a statement analysis processing method, in one embodiment of the statement analysis processing method, the statement analysis processing method comprises the following steps:
step S10, obtaining a pre-training model on a large sample data set in a source field, and transferring and learning the pre-training model to a target field;
the source domain may be a mature application scenario with a large amount of annotation data to train the respective model. The target domain may be a new application scenario, with little or no annotation data present. The transfer learning is to share the model parameters trained in the original field to the model of the new target field in a certain way to help the training of the new model, and the basic principle is that the data or tasks of the source field and the target field are related. The method comprises the steps of training a preset number of models on a large sample data set in a source field, selecting one of the models which has the best performance on the data set as a pre-training model, then migrating the pre-training model to a small sample scene in a target field, collecting partial user question sentences in the small sample scene in the target field, designing an intention/word slot frame according to the user question sentences, and marking data according to the frame by an organizer. It should be noted that, in different scenarios, the pre-trained model architecture used is the same, but the pre-trained model is refined on the labeled small sample data. In the finetune process, all parameters of the large sample model are taken to initialize parameters of the small sample model, and then training fine tuning is performed on the small sample labeling data of the new scene. And when the preset training model is successfully trained in the small sample scene in the target field and the small sample model is obtained, the obtained small sample model is interacted with an actual user for use, questions are continuously collected in the use process of the user, the training set is expanded, and the small sample model is promoted by the expanded data set.
Step S20, in the target field, obtaining each sentence characteristic of a preset question in the pre-training model, and performing semantic analysis on each sentence characteristic to determine each different intention corresponding to the preset question;
the intention is to identify what the user specifically expresses is what to do, specifically the intention is a classifier that classifies the user's needs into certain categories. For example: the sentence "I want to specify a ticket from Beijing to Shanghai" is that the user expresses his request, which can be defined as "inform" intention; "are the tickets all of several? The phrase "indicates that the user is querying the ticket information, which may be defined as a" request "intent. In a small sample scene in a target field, after a preset question is acquired from a pre-training model, sentence words or Chinese phrases and the like forming the preset question are also required to be acquired. Then, the input sentence words are replaced by corresponding word embedding in an embedding layer in the pre-training model, each sentence feature is extracted through a bidirectional LSTM network architecture in a common representation layer (common feature extraction layer) in the pre-training model, and semantic analysis is performed on the sentence features, so as to determine different intentions, wherein in a real application, each intention is expressed by several words, such as 'confirmation of purchase'. Among them, LSTM (Long Short-Term Memory) is a Long Short-Term Memory network, a time recurrent neural network, and is suitable for processing and predicting important events with relatively Long interval and delay in time sequence.
Step S30, obtaining intention similarity scores of the intentions in a pre-training model, and determining the highest intention similarity score in the intention similarity scores;
in the Intent task layer of the pre-training model, the features obtained by the common representation layer are further abstracted by using the bidirectional LSTM layer, and then the last state of each direction of the bidirectional LSTM is spliced and recorded as hintent. The expression words of each intention name (intent name) in the pre-training model are converted into semantic vectors with fixed length similar to embedding through semantic network, and then the semantic vectors are taken together with hintentAnd performing bilinear operation to obtain intention similarity scores of the intentions, wherein each intention is obtained by adopting the same method to obtain the intention similarity score corresponding to the intention, so that the highest intention similarity score with the highest score can be obtained by comparing the sizes of the intention similarity scores.
To assist in understanding the architecture and bilinear operation of the semantic network of the present invention, an example is provided below.
For example, assume an intent name sn_i = (w_1, w_2, ..., w_n). The semantic network first replaces each word w_i with its corresponding word embedding E(w_i). A one-layer DNN (Deep Neural Network) then applies a nonlinear mapping to E(w_i) to obtain the semantic vector of the word, and finally the semantic vectors of all n words are averaged to obtain the semantic vector of the intention name. The bilinear operation takes two vectors v_1 and v_2 and performs the matrix operation score = v_1^T W v_2 to obtain the similarity score of the two vectors.
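A minimal sketch of the semantic network and the bilinear operation just described, assuming PyTorch and a tanh nonlinearity (the patent does not name the nonlinearity): each word of an intent name is embedded, mapped through a one-layer DNN, and the word vectors are averaged; the bilinear score of two vectors is then v_1^T W v_2.

```python
import torch
import torch.nn as nn

class SemanticNetwork(nn.Module):
    """Maps the expression words of an intent name to one fixed-length
    semantic vector: embed each word, apply a one-layer DNN with a
    nonlinearity, then average over the n words."""
    def __init__(self, vocab_size, embed_dim=128, sem_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.dnn = nn.Linear(embed_dim, sem_dim)

    def forward(self, name_token_ids):         # (n_words,) word indices
        word_vecs = torch.tanh(self.dnn(self.embedding(name_token_ids)))
        return word_vecs.mean(dim=0)           # semantic vector of the name

def bilinear_score(v1, v2, W):
    """score = v1^T W v2, the bilinear similarity used above."""
    return v1 @ W @ v2
```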
Step S40, obtaining each word slot in the pre-training model, determining word slot similarity scores of each word slot in the pre-training model, and determining the highest word slot similarity score in each word slot similarity score;
the word slot is defined for key information in user expression, for example, in the expression of booking air ticket, our slot has "departure time", "origin" and "destination", and these three key information need to be identified. When obtaining each word Slot in the pre-training model and determining the word Slot similarity corresponding to each word Slot, it is necessary to determine the state of the current position at the Slot task layer in the pre-training model, specifically, the states of the bidirectional LSTM of the common representation layer and the bidirectional LSTM of the Intent task layer are spliced together at each input position as the state of the current position, and the state at the time of t is ht slot. As with the graph names, we convert the expression words for each word slot name (slot name) into a semantic vector r using semantic networki slotname. Meanwhile, the ith word slot may have a plurality of values, each value can be converted into a semantic vector through a semantic network, and the semantic vector of the jth value is recorded as ri,j slotvalue. It should be noted that after the scoring of all values is normalized, the weighted average is performed with the semantic vector of the corresponding value to obtain the semantic vector r of the whole word slot valuei slotvalue. Reuse ri slotvalueAnd ht slotAnd performing secondary linear operation to obtain the similarity score of the value of the word slot. The similarity score of the word slot name and the similarity score of the word slot value are added to obtain the state h of the word slot and the current positiont slotAnd scoring the total similarity, namely scoring the word groove similarity. The highest word bin similarity score is then determined among the individual word bin similarity scores.
Step S50, obtaining a final intention corresponding to the highest intention similarity score and a final word slot corresponding to the highest word slot similarity score, and outputting the final intention and the final word slot.
In the pre-training model, the intention corresponding to the highest intention similarity score is used as a final intention, the word slot corresponding to the highest word slot similarity score is used as a final word slot, and then the final word slot and the final intention are output.
To assist in understanding the structural flow of the pre-trained model of the present invention, the following description is given by way of example.
For example, as shown in FIG. 5, the model is divided into an Embeddings layer, a Common Representation layer, an Intent Task layer, and a Slot Task layer. The Embeddings layer replaces the input sentence words with the corresponding word embeddings, such as W_0, W_t, and W_{T+1}. The Common Representation layer, the Intent Task layer and the Slot Task layer all adopt a bidirectional LSTM network architecture. In the Intent Task layer, the features obtained from the common representation layer are further abstracted using a bidirectional LSTM layer, and the last states of the two directions of the bidirectional LSTM are spliced together and recorded as h^intent; h^intent is then compared for similarity with each intention (Intent1, Intent2 and Intent3), the value with the maximum similarity is taken through Softmax, and the intention with the maximum similarity, namely τ in the figure, is output. The final word slot is output with the same method: the state of the current position is determined first, with the state at time t recorded as h_t^slot; Slot Value 1, Slot Value 2, ..., Slot Value n are compared for similarity with h_t^slot, that is, the similarity scores of all values (the Semantic Similarity and Attention in the figure) are normalized, and a weighted average is taken with the semantic vectors of the corresponding values to obtain the semantic vector r_i^slotvalue of the whole word slot value. A bilinear operation between r_i^slotvalue and h_t^slot then gives the similarity score of the word slot value. Meanwhile, each slot name must also be compared for similarity with h_t^slot to obtain the similarity score of the word slot name. The similarity score of the word slot name and the similarity score of the word slot value are added to obtain the total similarity score between the word slot and the current position state h_t^slot. The highest word slot similarity score among the word slot similarity scores is then determined and output as S_t in the figure.
In the embodiment, a pre-training model on a large sample data set in a source field is acquired, and the pre-training model is migrated to a target field through transfer learning; in the target field, each sentence characteristic of a preset question in the pre-training model is acquired, and semantic analysis is performed on each sentence characteristic to determine the different intentions corresponding to the preset question; the intention similarity score of each intention in the pre-training model is acquired, and the highest intention similarity score among the intention similarity scores is determined; each word slot in the pre-training model is acquired, the word slot similarity score of each word slot is determined, and the highest word slot similarity score among the word slot similarity scores is determined; finally, the final intention corresponding to the highest intention similarity score and the final word slot corresponding to the highest word slot similarity score are acquired and output. Replacing the simple classification model of the original model with the calculation of intention similarity scores and word slot similarity scores handles the migration from the source field to the target field well: after the model migrates, the user does not need to redesign and re-plan, the model is extensible, and training data does not need to be added again, saving labor cost. This solves the technical problem that, after a model migrates to a new field, a spoken language understanding task cannot be learned and executed quickly because there are only a small number of samples or zero samples.
Further, on the basis of the first embodiment of the present invention, a second embodiment of the sentence analyzing and processing method of the present invention is provided. This embodiment refines step S30 of the first embodiment, namely the step of obtaining the intention similarity score of each intention in the pre-trained model and determining the highest intention similarity score among the intention similarity scores. Referring to fig. 3, the refinement includes:
step S31, acquiring a first state vector in the pre-training model;
step S32, obtaining intention name semantic vectors corresponding to the intentions, and calculating intention similarity scores between the intention name semantic vectors and the first state vector;
the first state vector may be a state vector obtained by further abstracting the features obtained from the common representation layer using the bi-directional LSTM layer and then concatenating the last state of each direction of the bi-directional LSTM at the Intent task layer in the model. The intention name is the name of the intention, and the expression words of the intention. After the first state vector in the pre-training model is obtained, the intention name semantic vectors corresponding to the intents are obtained again, and then the intention name semantic vectors and the first state vector are subjected to secondary linear operation, so that the intention similarity score is obtained. And since each intention has an intention similarity score corresponding to the intention, and the obtaining method is basically the same, the intention similarity score corresponding to each intention can be obtained by adopting the method.
In step S33, the intention similarity scores are compared to obtain the highest intention similarity score among the intention similarity scores.
When the intention similarity score of each intention has been acquired, the intention similarity scores are compared in magnitude, and the intention similarity score with the highest score is determined and taken as the highest intention similarity score. It should be noted that each intention similarity score needs to be compared with every other intention similarity score.
In the embodiment, the similarity score of each intention is determined by determining the similarity score between each intention name semantic vector and the first state vector, so that the accuracy of determining the intention of the user is ensured, and the use experience of the user is improved.
Specifically, the step of obtaining an intention name semantic vector corresponding to each intention includes:
step S321, obtaining each statement information in the intention, and determining statement semantic vectors corresponding to each statement information;
obtaining the intention name semantic vector corresponding to the intention requires first obtaining all statement information in the intention and determining the statement semantic vector corresponding to each statement information. For example, assume that there is a hypothetical intent name sni=(w1,w2...wn) The Semantic network first replaces each word with the corresponding word embedding: E (w)i). Then use oneLayer DNN (Deep Neural Network) Network E (w)i) And carrying out nonlinear mapping to obtain a semantic vector of the word.
Step S322, obtaining an average vector value of each sentence vector, and taking the average vector value as the intention name semantic vector.
After each statement semantic vector is obtained in the model, the average value of the statement semantic vectors, that is, the average vector value, is determined, and the average vector value is taken as the intention name semantic vector.
In the embodiment, sentence semantic vectors corresponding to all sentence information in the intention are determined, and the average value of the sentence semantic vectors is taken as the intention name semantic vector, so that the accuracy of detecting the intention similarity is improved, and the use experience of a user is guaranteed.
Further, on the basis of either the first or the second embodiment of the present invention, a third embodiment of the sentence analyzing and processing method of the present invention is provided. This embodiment refines step S40 of the first embodiment, namely the step of obtaining each word slot in the pre-training model and determining the word slot similarity score of each word slot in the pre-training model. The refinement includes:
step S41, acquiring word slots in the pre-training model;
step S42, acquiring the word slot name and the whole word slot value of the word slot, and determining a first similarity score of the word slot name and a second similarity score of the whole word slot value;
the first similarity score may be a similarity score between the word slot name and the current location state. The second similarity score may be a similarity score between the overall word bin value and the current location state. In practical applications, the word slot is generally expressed by one or more words, such as "food", and generally each word slot has some possible values, such as "food", which can be easily obtained: cake, apple, lamb leg, etc. Determining each possible word slot by analyzing a preset question in a pre-training model, then determining the word slot name and the whole word slot value of the word slot, determining the word slot name semantic vector corresponding to the word slot name and the value semantic vector corresponding to the whole word slot value, splicing the states of the bidirectional LSTM of the common representation layer and the bidirectional LSTM of the internal task layer at each input position in the internal task layer to be used as the state of the current position, namely a state vector, then performing secondary linear operation on the word slot name semantic vector and the state vector to obtain a first similarity score corresponding to the word slot name, and performing secondary linear operation on the semantic value vector and the state vector to obtain a second similarity score corresponding to the whole word slot value. For example, when there are three word slot vectors a1, a2, A3 in the word slot, the three vectors are respectively operated with the current state vector to obtain a score, and then the three scores are normalized to become C1, C2, C3, and then a1 × C1+ a2 × C2+ A3 × C3 is the semantic vector of the whole word slot value. The word slot name is the name of the slot, and the expression word of the slot is used. The overall word bin value may be a word bin value that is associated with each word bin value.
Step S43, determining the word slot similarity score of the word slot according to the sum of the first similarity score and the second similarity score.
After the first similarity score and the second similarity score are obtained, the first similarity score corresponding to the word slot name and the second similarity score corresponding to the whole word slot value are added to obtain a sum value, and the sum value is taken as the word slot similarity score between the word slot and the current position.
In this embodiment, the word slot similarity of the word slot is determined by determining the first similarity of the word slot name and the second similarity of the whole word slot value, so that the accuracy of determining the word slot similarity is improved.
Specifically, the step of determining a first similarity score of the word slot name and a second similarity score of the whole word slot value includes:
step S421, obtaining the current position state in the pre-training model, and determining a second state vector of the current position state;
the states of the bidirectional LSTM of the common representation layer and the bidirectional LSTM of the Intent task layer are spliced together at each input position in the Intent task layer in the pre-trained model as the state of the current position, i.e., the second state vector.
Step S422, acquiring a word slot name semantic vector corresponding to the word slot name, and determining a first similarity score between the word slot name semantic vector and the second state vector;
for the word slot name semantic vector of the word slot name, the word slot name semantic vector can be obtained by carrying out nonlinear operation on the word slot name through a layer of DNN network in the preset model, and then the word slot name semantic vector and the second state vector are subjected to secondary linear operation to obtain a first similarity score.
Step S423, obtaining a value semantic vector corresponding to the value of the whole word slot, and determining a second similarity score between the value semantic vector and the second state vector.
The value semantic vector corresponding to the whole word slot value is obtained by calculating the semantic vector of each word slot value in the word slot, determining the similarity score of each of these semantic vectors, normalizing the similarity scores, and taking a weighted average with the semantic vectors of the corresponding word slot values; a bilinear operation is then performed between the value semantic vector and the second state vector to obtain the second similarity score.
In this embodiment, the first similarity of the word slot name and the second similarity of the whole word slot value are determined from the current position state in the pre-training model, which helps ensure that the word slots in the system are those required by the user and improves the use experience of the user.
Specifically, the step of obtaining a value semantic vector corresponding to the value of the whole word slot includes:
step A10, obtaining each sub-word slot value in the word slot, and determining a sub-value semantic vector corresponding to each sub-word slot value;
the sub-word slot value can be any one of the word slots. And obtaining all sub-word slot values in the word slot, and performing nonlinear operation on the sub-word slot values through a layer of DNN network in the preset model to obtain sub-value semantic vectors corresponding to the sub-word slot values.
Step A11, calculating a third similarity score between the sub-value vector and the second state vector, and obtaining a vector product between the third similarity score and the sub-value vector;
the third similarity score may be a similarity score between any one of the word slot values and the current position state. And calculating a third similarity score between the sub-value vector and the state vector through secondary linear operation, and determining a vector product between the third similarity score and the sub-value vector.
Step A12, obtaining vector products corresponding to each sub-word slot value, and adding the vector products to obtain a value semantic vector corresponding to the whole word slot value.
And obtaining vector products corresponding to all the sub-word slot values, then adding all the vector products to obtain a sum value, and finally taking the sum value as a value semantic vector corresponding to the whole word slot value.
In this embodiment, the value semantic vector corresponding to the whole word slot value is determined according to all the sub-word slot values, so that the correlation between the value semantic vector and all the word slot values in the word slot is ensured, the accuracy of the value semantic vector is ensured, and the experience of a user is improved.
Specifically, the step of obtaining each word slot in the pre-training model includes:
step S411, acquiring a preset question sentence in the pre-training model;
step S412, performing semantic analysis on the preset question in the target field to determine each word slot in the pre-training model.
In the pre-training model, because the word slots required by each preset question are different, the preset question in the pre-training model needs to be acquired and subjected to semantic analysis so as to determine each word slot in the pre-training model. For example, when semantic analysis of a preset question finds that something related to food is needed, the word slot name may be "food", and the values in the word slot may be cake, apple, roasted lamb leg, and the like.
In the embodiment, each word slot in the pre-training model is determined according to the preset question in the target field, so that each word slot is ensured to be related to the preset question, the phenomenon that irrelevant word slots occupy word slot space is avoided, resources are saved, and the use experience of a user is improved.
Furthermore, referring to fig. 4, an embodiment of the present invention further provides a sentence analysis processing apparatus, including:
the migration module is used for acquiring a pre-training model on a large sample data set in a source field and migrating the pre-training model to a target field through transfer learning;
the determining module is used for acquiring each sentence characteristic of a preset question in the pre-training model in the target field and performing semantic analysis on each sentence characteristic to determine each different intention corresponding to the preset question;
the first acquisition module is used for acquiring intention similarity scores of all intentions in a pre-training model and determining the highest intention similarity score in all the intention similarity scores;
a second obtaining module, configured to obtain each word slot in the pre-training model, determine a word slot similarity score of each word slot in the pre-training model, and determine a highest word slot similarity score in each word slot similarity score;
and the output module is used for acquiring the final intention corresponding to the highest intention similarity score and the final word slot corresponding to the highest word slot similarity score and outputting the final intention and the final word slot.
Optionally, the first obtaining module is further configured to:
acquiring a first state vector in the pre-training model;
acquiring intention name semantic vectors corresponding to the intentions, and calculating intention similarity scores between the intention name semantic vectors and the first state vectors;
comparing each of the intention similarity scores to obtain a highest intention similarity score among the intention similarity scores.
Optionally, the first obtaining module is further configured to:
obtaining statement information in the intention, and determining statement semantic vectors corresponding to the statement information;
and acquiring an average vector value of each sentence vector, and taking the average vector value as the intention name semantic vector.
Optionally, the second obtaining module is further configured to:
acquiring word slots in the pre-training model;
acquiring a word slot name and an integral word slot value of the word slot, and determining a first similarity score of the word slot name and a second similarity score of the integral word slot value;
and determining a word slot similarity score for the word slot based on a sum of the first similarity score and the second similarity score.
Optionally, the second obtaining module is further configured to:
acquiring a current position state in the pre-training model, and determining a second state vector of the current position state;
acquiring a word slot name semantic vector corresponding to the word slot name, and determining a first similarity score between the word slot name semantic vector and the second state vector;
and acquiring a value semantic vector corresponding to the value of the whole word slot, and determining a second similarity score between the value semantic vector and the second state vector.
Optionally, the second obtaining module is further configured to:
obtaining each sub-word slot value in the word slot, and determining a sub-value semantic vector corresponding to each sub-word slot value;
calculating a third similarity score between the sub-value semantic vector and the second state vector, and obtaining a vector product between the third similarity score and the sub-value semantic vector;
and acquiring vector products corresponding to the values of the sub word slots, and adding the vector products to acquire a value semantic vector corresponding to the value of the whole word slot.
Optionally, the second obtaining module is further configured to:
acquiring a preset question in the pre-training model;
and performing semantic analysis on the preset question in the target field to determine each word slot in the pre-training model.
The steps implemented by the functional modules of the statement analysis processing apparatus may refer to the embodiments of the statement analysis processing method of the present invention, and are not described herein again.
The present invention also provides a terminal, including: a memory, a processor, a communication bus, and a statement analysis processing program stored on the memory, wherein:
the communication bus is used for realizing connection communication between the processor and the memory;
the processor is used for executing the statement analysis processing program to realize the steps of the statement analysis processing method.
The present invention also provides a storage medium storing one or more programs, which can be executed by one or more processors to implement the steps of the embodiments of the statement analysis processing method.
The specific implementation of the storage medium of the present invention is substantially the same as that of the above-mentioned embodiments of the statement analysis processing method, and is not described herein again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A statement analysis processing method is characterized by comprising the following steps:
acquiring a pre-training model on a large sample data set in a source field, and migrating the pre-training model to a target field through transfer learning;
in the target field, obtaining each sentence characteristic of a preset question in the pre-training model, and performing semantic analysis on each sentence characteristic to determine each different intention corresponding to the preset question;
acquiring intention similarity scores of the intentions in a pre-training model, and determining the highest intention similarity score in the intention similarity scores;
obtaining each word slot in the pre-training model, determining word slot similarity scores of each word slot in the pre-training model, and determining the highest word slot similarity score in each word slot similarity score, wherein a preset question sentence is analyzed according to the pre-training model to obtain each word slot in the pre-training model;
and acquiring a final intention corresponding to the highest intention similarity score and a final word slot corresponding to the highest word slot similarity score, and outputting the final intention and the final word slot.
2. The sentence analysis processing method of claim 1 wherein the step of obtaining intent similarity scores for each of the intents in a pre-trained model and determining the highest intent similarity score among each of the intent similarity scores comprises:
acquiring a first state vector in the pre-training model;
acquiring intention name semantic vectors corresponding to the intentions, and calculating intention similarity scores between the intention name semantic vectors and the first state vectors;
comparing each of the intention similarity scores to obtain a highest intention similarity score among the intention similarity scores.
3. The sentence analysis processing method of claim 2, wherein the step of obtaining the intention name semantic vector corresponding to each intention comprises:
obtaining statement information in the intention, and determining statement semantic vectors corresponding to the statement information;
and acquiring an average vector value of each statement semantic vector, and taking the average vector value as the intention name semantic vector.
4. The sentence analysis processing method of claim 1 wherein the step of obtaining word slots in the pre-trained model and determining word slot similarity scores for the word slots in the pre-trained model comprises:
acquiring word slots in the pre-training model;
acquiring a word slot name and an integral word slot value of the word slot, and determining a first similarity score of the word slot name and a second similarity score of the integral word slot value;
determining a word slot similarity score for the word slot based on a sum of the first similarity score and the second similarity score.
5. The sentence analysis processing method of claim 4 wherein the step of determining a first similarity score for the word slot name and a second similarity score for the whole word slot value comprises:
acquiring a current position state in the pre-training model, and determining a second state vector of the current position state;
acquiring a word slot name semantic vector corresponding to the word slot name, and determining a first similarity score between the word slot name semantic vector and the second state vector;
and acquiring a value semantic vector corresponding to the value of the whole word slot, and determining a second similarity score between the value semantic vector and the second state vector.
6. The sentence analysis processing method of claim 5, wherein the step of obtaining a value semantic vector corresponding to the value of the whole word slot comprises:
obtaining each sub-word slot value in the word slot, and determining a sub-value semantic vector corresponding to each sub-word slot value;
calculating a third similarity score between the sub-value semantic vector and the second state vector, and obtaining a vector product between the third similarity score and the sub-value semantic vector;
and acquiring vector products corresponding to the values of the sub word slots, and adding the vector products to acquire a value semantic vector corresponding to the value of the whole word slot.
7. The sentence analysis processing method of claim 4 wherein the step of obtaining word slots in the pre-trained model comprises:
acquiring a preset question in the pre-training model;
and performing semantic analysis on the preset question in the target field to determine each word slot in the pre-training model.
8. A sentence analysis processing apparatus, characterized by comprising:
the migration module is used for acquiring a pre-training model on a large sample data set in a source field and migrating the pre-training model to a target field through transfer learning;
the determining module is used for acquiring each sentence characteristic of a preset question in the pre-training model in the target field and performing semantic analysis on each sentence characteristic to determine each different intention corresponding to the preset question;
the first acquisition module is used for acquiring the intention similarity score of each intention in the pre-training model and determining the highest intention similarity score among the intention similarity scores;
the second acquisition module is used for acquiring each word slot in the pre-training model, determining the word slot similarity score of each word slot in the pre-training model, and determining the highest word slot similarity score among the word slot similarity scores, wherein each word slot is obtained by analyzing the preset question according to the pre-training model;
the output module is used for acquiring the final intention corresponding to the highest intention similarity score and the final word slot corresponding to the highest word slot similarity score, and outputting the final intention and the final word slot.
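A structural sketch of the apparatus in claim 8, with each module reduced to a callable placeholder; the class and attribute names are illustrative rather than the patent's own code:

    class SentenceAnalysisApparatus:
        def __init__(self, migration, determining,
                     first_acquisition, second_acquisition, output):
            self.migration = migration                    # transfers the pre-training model to the target field
            self.determining = determining                # extracts sentence features and candidate intentions
            self.first_acquisition = first_acquisition    # scores intentions, keeps the highest
            self.second_acquisition = second_acquisition  # scores word slots, keeps the highest
            self.output = output                          # outputs the final intention and final word slot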
9. Sentence analysis processing equipment, characterized by comprising: a memory, a processor, and a sentence analysis processing program stored on the memory and executable on the processor, the sentence analysis processing program, when executed by the processor, implementing the steps of the sentence analysis processing method of any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that a sentence analysis processing program is stored thereon, which, when executed by a processor, implements the steps of the sentence analysis processing method of any one of claims 1 to 7.
CN201811464437.5A 2018-11-30 2018-11-30 Statement analysis processing method, device, equipment and computer readable storage medium Active CN109597993B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201811464437.5A CN109597993B (en) 2018-11-30 2018-11-30 Statement analysis processing method, device, equipment and computer readable storage medium
PCT/CN2019/081282 WO2020107765A1 (en) 2018-11-30 2019-04-03 Statement analysis processing method, apparatus and device, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811464437.5A CN109597993B (en) 2018-11-30 2018-11-30 Statement analysis processing method, device, equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN109597993A CN109597993A (en) 2019-04-09
CN109597993B (en) 2021-11-05

Family

ID=65959469

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811464437.5A Active CN109597993B (en) 2018-11-30 2018-11-30 Statement analysis processing method, device, equipment and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN109597993B (en)
WO (1) WO2020107765A1 (en)

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110188182B (en) 2019-05-31 2023-10-27 中国科学院深圳先进技术研究院 Model training method, dialogue generating method, device, equipment and medium
CN110309875A (en) * 2019-06-28 2019-10-08 哈尔滨工程大学 A kind of zero sample object classification method based on the synthesis of pseudo- sample characteristics
CN110399492A (en) * 2019-07-22 2019-11-01 阿里巴巴集团控股有限公司 The training method and device of disaggregated model aiming at the problem that user's question sentence
CN110674648B (en) * 2019-09-29 2021-04-27 厦门大学 Neural network machine translation model based on iterative bidirectional migration
CN110909541A (en) * 2019-11-08 2020-03-24 杭州依图医疗技术有限公司 Instruction generation method, system, device and medium
CN111563144B (en) * 2020-02-25 2023-10-20 升智信息科技(南京)有限公司 User intention recognition method and device based on statement context prediction
CN111460118B (en) * 2020-03-26 2023-10-20 聚好看科技股份有限公司 Artificial intelligence conflict semantic recognition method and device
CN111767377B (en) * 2020-06-22 2024-05-28 湖北马斯特谱科技有限公司 Efficient spoken language understanding and identifying method oriented to low-resource environment
CN111738016B (en) * 2020-06-28 2023-09-05 中国平安财产保险股份有限公司 Multi-intention recognition method and related equipment
CN111931512B (en) * 2020-07-01 2024-07-26 联想(北京)有限公司 Statement intention determining method and device and storage medium
CN111859909B (en) * 2020-07-10 2022-05-31 山西大学 Semantic scene consistency recognition reading robot
CN112016300B (en) * 2020-09-09 2022-10-14 平安科技(深圳)有限公司 Pre-training model processing method, pre-training model processing device, downstream task processing device and storage medium
CN112084770B (en) * 2020-09-14 2024-07-05 深圳前海微众银行股份有限公司 Word slot filling method, device and readable storage medium
CN112214998B (en) * 2020-11-16 2023-08-22 中国平安财产保险股份有限公司 Method, device, equipment and storage medium for joint identification of intention and entity
CN112507712B (en) * 2020-12-11 2024-01-26 北京百度网讯科技有限公司 Method and device for establishing slot identification model and slot identification
CN112883180A (en) * 2021-02-24 2021-06-01 挂号网(杭州)科技有限公司 Model training method and device, electronic equipment and storage medium
CN112926313B (en) * 2021-03-10 2023-08-15 新华智云科技有限公司 Method and system for extracting slot position information
CN113326360B (en) * 2021-04-25 2022-12-13 哈尔滨工业大学 Natural language understanding method in small sample scene
CN113139816B (en) * 2021-04-26 2024-07-16 北京沃东天骏信息技术有限公司 Information processing method, apparatus, electronic device and storage medium
CN113378970B (en) * 2021-06-28 2023-08-22 山东浪潮成方数字服务有限公司 Sentence similarity detection method and device, electronic equipment and storage medium
CN114444462B (en) * 2022-01-26 2022-11-29 北京百度网讯科技有限公司 Model training method and man-machine interaction method and device
US20230367966A1 (en) * 2022-05-11 2023-11-16 Robert Bosch Gmbh Development platform for facilitating the optimization of natural-language-understanding systems
CN117574878B (en) * 2024-01-15 2024-05-17 西湖大学 Component syntactic analysis method, device and medium for mixed field
CN117709394A (en) * 2024-02-06 2024-03-15 华侨大学 Vehicle track prediction model training method, multi-model migration prediction method and device

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11250218B2 (en) * 2015-12-11 2022-02-15 Microsoft Technology Licensing, Llc Personalizing natural language understanding systems
CN106156003B (en) * 2016-06-30 2018-08-28 北京大学 A kind of question sentence understanding method in question answering system
CN107341146B (en) * 2017-06-23 2020-08-04 上海交大知识产权管理有限公司 Migratable spoken language semantic analysis system based on semantic groove internal structure and implementation method thereof
CN107330120B (en) * 2017-07-14 2018-09-18 三角兽(北京)科技有限公司 Inquire answer method, inquiry answering device and computer readable storage medium
CN107688614B (en) * 2017-08-04 2018-08-10 平安科技(深圳)有限公司 It is intended to acquisition methods, electronic device and computer readable storage medium
CN108305612B (en) * 2017-11-21 2020-07-31 腾讯科技(深圳)有限公司 Text processing method, text processing device, model training method, model training device, storage medium and computer equipment
CN107832476B (en) * 2017-12-01 2020-06-05 北京百度网讯科技有限公司 Method, device, equipment and storage medium for understanding search sequence
CN108021660B (en) * 2017-12-04 2020-05-22 中国人民解放军国防科技大学 Topic self-adaptive microblog emotion analysis method based on transfer learning
CN108197167A (en) * 2017-12-18 2018-06-22 深圳前海微众银行股份有限公司 Human-computer dialogue processing method, equipment and readable storage medium storing program for executing
CN108182264B (en) * 2018-01-09 2022-04-01 武汉大学 Ranking recommendation method based on cross-domain ranking recommendation model
CN108334496B (en) * 2018-01-30 2020-06-12 中国科学院自动化研究所 Man-machine conversation understanding method and system for specific field and related equipment
CN108681585A (en) * 2018-05-14 2018-10-19 浙江工业大学 A kind of construction method of the multi-source transfer learning label popularity prediction model based on NetSim-TL
CN108874779B (en) * 2018-06-21 2021-09-21 东北大学 Control method of graph-based poetry writing system established based on K8s cluster

Also Published As

Publication number Publication date
WO2020107765A1 (en) 2020-06-04
CN109597993A (en) 2019-04-09

Similar Documents

Publication Publication Date Title
CN109597993B (en) Statement analysis processing method, device, equipment and computer readable storage medium
JP6894534B2 (en) Information processing method and terminal, computer storage medium
CN109284399B (en) Similarity prediction model training method and device and computer readable storage medium
CN106406806B (en) Control method and device for intelligent equipment
CN109471945B (en) Deep learning-based medical text classification method and device and storage medium
CN109766840B (en) Facial expression recognition method, device, terminal and storage medium
CN109471915B (en) Text evaluation method, device and equipment and readable storage medium
CN111666416B (en) Method and device for generating semantic matching model
CN111090987A (en) Method and apparatus for outputting information
CN111738010B (en) Method and device for generating semantic matching model
CN111931859B (en) Multi-label image recognition method and device
CN111444850B (en) Picture detection method and related device
CN112364829B (en) Face recognition method, device, equipment and storage medium
CN113360660B (en) Text category recognition method, device, electronic equipment and storage medium
CN111368045A (en) User intention identification method, device, equipment and computer readable storage medium
CN112163074A (en) User intention identification method and device, readable storage medium and electronic equipment
CN117972118A (en) Target retrieval method, target retrieval device, electronic equipment and storage medium
CN114090792A (en) Document relation extraction method based on comparison learning and related equipment thereof
CN112417095B (en) Voice message processing method and device
CN113220854A (en) Intelligent dialogue method and device for machine reading understanding
CN118097534A (en) Park security video monitoring method, device, equipment and storage medium
CN117520497A (en) Large model interaction processing method, system, terminal, equipment and medium
CN112069786A (en) Text information processing method and device, electronic equipment and medium
CN111026849B (en) Data processing method and device
CN116643814A (en) Model library construction method, model calling method based on model library and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant