CN114021568A - Model fusion method, system, electronic device and medium - Google Patents
- Publication number
- CN114021568A CN114021568A CN202111293396.XA CN202111293396A CN114021568A CN 114021568 A CN114021568 A CN 114021568A CN 202111293396 A CN202111293396 A CN 202111293396A CN 114021568 A CN114021568 A CN 114021568A
- Authority
- CN
- China
- Prior art keywords
- entity
- result set
- labeling result
- entity labeling
- selecting
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/279—Recognition of textual entities
- G06F40/289—Phrasal analysis, e.g. finite state techniques or chunking
- G06F40/295—Named entity recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/36—Creation of semantic tools, e.g. ontology or thesauri
- G06F16/367—Ontology
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computational Linguistics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Audiology, Speech & Language Pathology (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Animal Behavior & Ethology (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Image Analysis (AREA)
Abstract
The application discloses a model fusion method, system, electronic device, and medium. The model fusion method comprises the following steps: performing multiple rounds of training on a model to obtain probability vectors of the training models, processing the probability vectors to obtain average probability vectors, and performing entity labeling on the average probability vectors to obtain a second entity labeling result set; distinguishing new entities from old entities according to a first entity labeling result set, then selecting, from the second entity labeling result set, a new entity labeling result set and an old entity labeling result set corresponding to the new and old entities; presetting a first credibility threshold and a second credibility threshold, then selecting a third entity labeling result set from the second entity labeling result set according to the first credibility threshold and a fourth entity labeling result set from the second entity labeling result set according to the second credibility threshold; and selecting from the third entity labeling result set and the fourth entity labeling result set according to the total number of training models to obtain a final entity labeling result.
Description
Technical Field
The present application relates to the field of knowledge graph technology, and in particular, to a model fusion method, system, electronic device, and medium.
Background
In the field of machine learning, the same problem can be solved by multiple models with different parameters and different structures, so a method is needed to integrate different models into one robust model, and the integrated model must be ensured to outperform the underlying sub-models. Current model fusion methods include Bagging (bootstrap aggregating), Boosting, and Stacking. Bagging has N models vote on the prediction in classification problems and averages the N models' predictions in regression problems. Boosting assigns equal weight to every training example at the start, then trains on the training set for t rounds; after each round, the examples that were mispredicted receive larger weights, so the learning algorithm pays more attention to the hard samples in each subsequent round, yielding a sequence of prediction functions. Stacking first trains several different models, then trains a further model that takes the outputs of the previously trained models as its inputs to produce the final output.
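As a concrete illustration of the two Bagging modes described above, the following sketch shows majority voting for classification and averaging for regression (function names are illustrative, not from the patent):

```python
from collections import Counter

def bagging_vote(predictions):
    """Classification Bagging: N models each predict a label; the majority wins."""
    return Counter(predictions).most_common(1)[0][0]

def bagging_average(predictions):
    """Regression Bagging: the N models' numeric predictions are averaged."""
    return sum(predictions) / len(predictions)

print(bagging_vote(["B-ORG", "O", "B-ORG"]))  # majority label: B-ORG
print(bagging_average([0.2, 0.4, 0.6]))       # mean prediction, approximately 0.4
```

Boosting and Stacking differ in that the models are trained sequentially (reweighted data) or hierarchically (a meta-model over base outputs), respectively.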
The existing model fusion methods do not consider the characteristics of the task being solved or the structure of the data being processed. How to effectively improve the fusion effect for a specific class of tasks is therefore an urgent problem to be solved.
Disclosure of Invention
The embodiments of the application provide a model fusion method, system, electronic device, and medium, which at least address the problem that ignoring task characteristics during model fusion reduces how well the fusion process targets the specific task at hand.
The invention provides a model fusion method, which comprises the following steps:
A labeling result set acquisition step: performing multiple rounds of training on a model to obtain probability vectors of the training models, processing the probability vectors to obtain average probability vectors, and performing entity labeling on the average probability vectors to obtain a second entity labeling result set;
An entity judgment step: distinguishing new entities from old entities according to a first entity labeling result set, then selecting, from the second entity labeling result set, a new entity labeling result set and an old entity labeling result set corresponding to the new and old entities;
A credibility presetting step: presetting a first credibility threshold and a second credibility threshold, then selecting a third entity labeling result set from the second entity labeling result set according to the first credibility threshold and a fourth entity labeling result set from the second entity labeling result set according to the second credibility threshold;
A final entity labeling result acquisition step: selecting from the third entity labeling result set and the fourth entity labeling result set according to the total number of training models to obtain a final entity labeling result.
In the above model fusion method, the labeling result set acquisition step comprises:
performing multiple rounds of model training on the model to obtain a plurality of training models and the probability vectors output by each training model's output layer, then averaging the probability vectors to obtain a plurality of average probability vectors;
performing entity labeling on the average probability vectors by the Argmax method, then aggregating the labeling results to obtain the second entity labeling result set.
In the above model fusion method, the entity judgment step comprises:
checking the second entity labeling result set against the first entity labeling result set to obtain a judgment result;
according to the judgment result, if a labeled entity in the second entity labeling result set appears in the first entity labeling result set, that entity is an old entity, and the old entity labeling result set corresponding to the old entities is selected from the second entity labeling result set.
In the above model fusion method, the entity judgment step further comprises:
if a labeled entity in the second entity labeling result set does not appear in the first entity labeling result set, that entity is a new entity, and the new entity labeling result set corresponding to the new entities is selected from the second entity labeling result set.
In the above model fusion method, the credibility presetting step comprises:
presetting the first credibility threshold for old entities and the second credibility threshold for new entities;
selecting the old entity labeling results whose frequency of occurrence in the second entity labeling result set is greater than the first credibility threshold to obtain the third entity labeling result set.
In the above model fusion method, the credibility presetting step further comprises:
selecting the new entity labeling results whose frequency of occurrence in the second entity labeling result set is greater than the second credibility threshold to obtain the fourth entity labeling result set.
In the above model fusion method, the final entity labeling result acquisition step comprises:
selecting, from the third entity labeling result set and the fourth entity labeling result set, the results supported by fewer than the total number of training models but more than half of that total, to obtain the final entity labeling result.
The present invention also provides a model fusion system adapted to the above model fusion method, the model fusion system comprising:
a labeling result set acquisition unit: performing multiple rounds of training on a model to obtain probability vectors of the training models, processing the probability vectors to obtain average probability vectors, and performing entity labeling on the average probability vectors to obtain a second entity labeling result set;
an entity judgment unit: distinguishing new entities from old entities according to a first entity labeling result set, then selecting, from the second entity labeling result set, a new entity labeling result set and an old entity labeling result set corresponding to the new and old entities;
a credibility presetting unit: presetting a first credibility threshold and a second credibility threshold, selecting a third entity labeling result set from the second entity labeling result set according to the first credibility threshold, then selecting a fourth entity labeling result set from the second entity labeling result set according to the second credibility threshold;
a final entity labeling result acquisition unit: selecting from the third entity labeling result set and the fourth entity labeling result set according to the total number of training models to obtain a final entity labeling result.
The present invention also provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements any one of the model fusion methods described above when executing the computer program.
The present invention also provides a computer-readable storage medium having computer program instructions stored thereon which, when executed by a processor, implement any one of the model fusion methods described above.
Compared with the prior art, the model fusion method, system, electronic device, and medium provided by the invention use a two-stage model fusion scheme tailored to the entity recognition task. In the first stage, after multiple rounds of model training of the sub-models, all of the sub-models' training process results and their labeling capability are fully utilized, improving the accuracy of named entity recognition. In the second stage, after the entities labeled in the first stage are judged new or old against the manually labeled existing entity labeling result set, different credibility thresholds (that is, different fusion strategies) are applied to new and old entities, increasing how well the fusion process targets the specific task and improving knowledge graph construction capability.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below to provide a more thorough understanding of the application.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a flow diagram of a model fusion method according to an embodiment of the present application;
FIG. 2 is a flow diagram of model fusion according to an embodiment of the present application;
FIG. 3 is a schematic diagram of the model fusion system of the present invention;
fig. 4 is a frame diagram of an electronic device according to an embodiment of the present application.
Wherein the reference numerals are:
a labeling result set acquisition unit: 51;
an entity judgment unit: 52;
a reliability presetting unit: 53;
a final entity labeling result obtaining unit: 54;
a bus: 80;
a processor: 81;
a memory: 82;
a communication interface: 83.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present application without any inventive step are within the scope of protection of the present application.
It is obvious that the drawings in the following description are only examples or embodiments of the present application, and that it is also possible for a person skilled in the art to apply the present application to other similar contexts on the basis of these drawings without inventive effort. Moreover, it should be appreciated that such a development effort might be complex and tedious, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure, and thus should not be construed as a limitation of this disclosure.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of ordinary skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms used herein shall have the ordinary meaning understood by those of ordinary skill in the art to which this application belongs. The words "a," "an," "the," and similar words in this application do not denote a limitation of quantity and may refer to the singular or the plural. The terms "including," "comprising," "having," and any variations thereof in this application are intended to cover non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (units) is not limited to the listed steps or units, but may include other steps or units not expressly listed or inherent to such process, method, article, or apparatus. The words "connected," "coupled," and the like in this application are not limited to physical or mechanical connections, and may include electrical connections, whether direct or indirect. The term "plurality" herein means two or more. "And/or" describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the associated objects are in an "or" relationship. The terms "first," "second," "third," and the like merely distinguish similar objects and do not denote a particular ordering.
The invention starts from the characteristics of the named entity recognition task, and increases the understanding and the utilization of the task characteristics in the model fusion process by a two-stage model fusion mode aiming at the entity recognition task.
The present invention will be described with reference to specific examples.
Example one
The present embodiment provides a model fusion method. Referring to fig. 1 to 2, fig. 1 is a flowchart of a model fusion method according to an embodiment of the present application; fig. 2 is a flowchart of model fusion according to an embodiment of the present application, and as shown in fig. 1 to 2, the model fusion method includes the following steps:
Labeling result set acquisition step S1: performing multiple rounds of training on the model to obtain probability vectors of the training models, processing the probability vectors to obtain average probability vectors, and performing entity labeling on the average probability vectors to obtain a second entity labeling result set;
Entity judgment step S2: distinguishing new entities from old entities according to the first entity labeling result set, then selecting, from the second entity labeling result set, a new entity labeling result set and an old entity labeling result set corresponding to the new and old entities;
Credibility presetting step S3: presetting a first credibility threshold and a second credibility threshold, then selecting a third entity labeling result set from the second entity labeling result set according to the first credibility threshold and a fourth entity labeling result set from the second entity labeling result set according to the second credibility threshold;
Final entity labeling result acquisition step S4: selecting from the third entity labeling result set and the fourth entity labeling result set according to the total number of training models to obtain a final entity labeling result.
In an embodiment, the labeling result set acquisition step S1 comprises:
performing multiple rounds of model training on the model to obtain a plurality of training models and the probability vectors output by each training model's output layer, then averaging the probability vectors to obtain a plurality of average probability vectors;
performing entity labeling on the average probability vectors by the Argmax method, then aggregating the labeling results to obtain the second entity labeling result set.
In a specific implementation, in the first stage, multiple rounds of model training are performed on the sub-models to obtain a plurality of training models (Model_i) and the probability vectors {logits_i.x} output by each training model's output layer; the probability vectors are averaged to obtain a plurality of average probability vectors, entity labeling is performed on them by the Argmax method, and the labeling results are aggregated into the second entity labeling Result set {Result_i}.
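A minimal sketch of this first stage, assuming k sub-models whose output layers emit per-token probability vectors; the array shapes and label map below are illustrative, not from the patent:

```python
import numpy as np

def label_by_average(logits, id2label):
    """Average the k models' probability vectors per token, then label by Argmax."""
    avg = logits.mean(axis=0)   # average probability vector, shape (seq_len, num_labels)
    ids = avg.argmax(axis=-1)   # Argmax labeling per token
    return [id2label[int(i)] for i in ids]

id2label = {0: "O", 1: "B-ENT", 2: "I-ENT"}
# Shape (k_models=2, seq_len=2, num_labels=3): {logits_i.x} for Model_1, Model_2
logits = np.array([
    [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]],
    [[0.6, 0.3, 0.1], [0.2, 0.6, 0.2]],
])
print(label_by_average(logits, id2label))  # ['O', 'B-ENT']
```

Aggregating the per-model result sets then yields {Result_i}.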
In an embodiment, the entity judgment step S2 comprises:
checking the second entity labeling result set against the first entity labeling result set to obtain a judgment result;
according to the judgment result, if a labeled entity in the second entity labeling result set appears in the first entity labeling result set, that entity is an old entity, and the old entity labeling result set corresponding to the old entities is selected from the second entity labeling result set.
In a specific implementation, in the second stage, the second entity labeling Result set {Result_i} is checked against the first entity labeling result set (the manually labeled existing entity labeling result set). According to the judgment result, if a labeled entity in {Result_i} appears in the first entity labeling result set, it is an old entity (Old Entities), and the old entity labeling result set corresponding to the old entities is selected from {Result_i}; if a labeled entity in {Result_i} does not appear in the first entity labeling result set, it is a new entity (New Entities), and the new entity labeling result set corresponding to the new entities is selected from {Result_i}.
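The old/new judgment of this second stage amounts to a set partition, which can be sketched as follows (entity names are illustrative, not from the patent):

```python
def split_entities(labeled_entities, first_result_set):
    """Partition entities labeled in stage one: those already present in the
    manually labeled first entity labeling result set are old; the rest are new."""
    known = set(first_result_set)
    old = {e for e in labeled_entities if e in known}
    new = {e for e in labeled_entities if e not in known}
    return old, new

old, new = split_entities({"Acme Corp", "Foo Labs"}, {"Acme Corp", "Bar Inc"})
print(sorted(old), sorted(new))  # ['Acme Corp'] ['Foo Labs']
```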
In an embodiment, the credibility presetting step S3 comprises:
presetting a first credibility threshold for old entities and a second credibility threshold for new entities;
selecting the old entity labeling results whose frequency of occurrence in the second entity labeling result set is greater than the first credibility threshold to obtain the third entity labeling result set; and selecting the new entity labeling results whose frequency of occurrence in the second entity labeling result set is greater than the second credibility threshold to obtain the fourth entity labeling result set.
In a specific implementation, a first credibility threshold (n) for old entities and a second credibility threshold (m) for new entities are preset. Because an old entity is naturally more credible than a new one, the first credibility threshold is greater than the second, i.e., n > m. The old entity labeling results whose frequency in the second entity labeling Result set {Result_i} is greater than n are selected to obtain the third entity labeling result set, and the new entity labeling results whose frequency in {Result_i} is greater than m are selected to obtain the fourth entity labeling result set.
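One way to realize this frequency filtering, assuming each of the k training models contributes one result set and "frequency" counts how many models labeled a given entity (all names are illustrative, not from the patent):

```python
from collections import Counter

def select_by_thresholds(results_per_model, known_entities, n, m):
    """Old entities must be labeled by more than n models, new entities by more
    than m models (n > m, since old entities are naturally more credible)."""
    counts = Counter(e for result in results_per_model for e in set(result))
    third = {e for e, c in counts.items() if e in known_entities and c > n}       # old
    fourth = {e for e, c in counts.items() if e not in known_entities and c > m}  # new
    return third, fourth

# k = 7 models; "OldCo" is a known (old) entity, "NewCo" and "Weak" are new
results = [["OldCo", "NewCo"]] * 5 + [["OldCo"]] + [["Weak"]]
third, fourth = select_by_thresholds(results, {"OldCo"}, n=5, m=4)
print(third, fourth)  # {'OldCo'} {'NewCo'}
```

Here "OldCo" is labeled by 6 > n models and "NewCo" by 5 > m models, while "Weak" (1 model) is filtered out.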
In an embodiment, the final entity labeling result acquisition step S4 comprises:
selecting, from the third entity labeling result set and the fourth entity labeling result set, the results supported by fewer than the total number of training models but more than half of that total, to obtain the final entity labeling Result (Submit Result). To avoid the contradictory case in which two different labelings of the same new entity are accepted simultaneously, the first and second credibility thresholds, which are set to increase the reliability of the model results, are both greater than half (k/2) of the total number (k) of training models, i.e., k > n > m > k/2.
Referring to fig. 3, fig. 3 is a schematic structural diagram of a model fusion system according to the present invention. As shown in fig. 3, the model fusion system of the present invention is suitable for the above model fusion method, and includes:
the labeling result set obtaining unit 51: performing multiple rounds of training on the model to obtain a probability vector of the training model, processing the probability vector to obtain an average probability vector, and performing entity labeling on the average probability vector to obtain a second entity labeling result set;
the entity judgment unit 52: after distinguishing a new entity from an old entity according to a first entity labeling result set, selecting a new entity labeling result set and an old entity labeling result set corresponding to the new entity and the old entity from a second entity labeling result set;
reliability presetting unit 53: presetting a first credibility threshold and a second credibility threshold, selecting a third entity marking result set from the second entity marking result set according to the first credibility threshold, and then selecting a fourth entity marking result set from the second entity marking result set according to the second credibility threshold;
the final entity labeling result obtaining unit 54: and selecting the third entity labeling result set and the fourth entity labeling result set according to the total number of the training models to obtain a final entity labeling result.
Example three
Referring to fig. 4, this embodiment discloses a specific implementation of an electronic device. The electronic device may include a processor 81 and a memory 82 storing computer program instructions.
Specifically, the processor 81 may include a Central Processing Unit (CPU) or an Application Specific Integrated Circuit (ASIC), or may be configured as one or more integrated circuits implementing the embodiments of the present application.
The memory 82 may be used to store or cache various data files for processing and/or communication use, as well as possible computer program instructions executed by the processor 81.
The processor 81 implements the model fusion method in the above-described embodiments by reading and executing computer program instructions stored in the memory 82.
In some of these embodiments, the electronic device may also include a communication interface 83 and a bus 80. As shown in fig. 4, the processor 81, the memory 82, and the communication interface 83 are connected via the bus 80 to complete communication therebetween.
The communication interface 83 is used to implement communication between modules, devices, units and/or equipment in the embodiments of the present application. The communication interface 83 may also carry out data communication with other components, such as external devices, image/abnormal-data monitoring devices, databases, external storage, and image/abnormal-data monitoring workstations.
The bus 80 includes hardware, software, or both, coupling the components of the electronic device to one another. The bus 80 includes, but is not limited to, at least one of the following: a data bus, an address bus, a control bus, an expansion bus, and a local bus. By way of example and not limitation, the bus 80 may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front-Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association Local Bus (VLB), another suitable bus, or a combination of two or more of these. The bus 80 may include one or more buses, where appropriate. Although specific buses are described and shown in the embodiments of the application, any suitable buses or interconnects are contemplated by the application.
The electronic device may be connected to a model fusion system to implement the method in conjunction with fig. 1-2.
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
In conclusion, the present application adopts a two-stage model fusion scheme for the entity recognition task: the first stage fully utilizes all training process results of the sub-models and their labeling capability, improving the accuracy of named entity recognition; the second stage applies different fusion strategies to the new and old entities labeled in the first stage, increasing how well the fusion process targets the specific task.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent application shall be subject to the protection scope of the appended claims.
Claims (10)
1. A model fusion method, applied to a scenario of two-stage model fusion for an entity recognition task, the method comprising:
a labeling result set acquisition step: performing multiple rounds of training on a model to obtain probability vectors of the training models, processing the probability vectors to obtain average probability vectors, and performing entity labeling on the average probability vectors to obtain a second entity labeling result set;
an entity judgment step: distinguishing new entities from old entities according to a first entity labeling result set, then selecting, from the second entity labeling result set, a new entity labeling result set and an old entity labeling result set corresponding to the new and old entities;
a credibility presetting step: presetting a first credibility threshold and a second credibility threshold, then selecting a third entity labeling result set from the second entity labeling result set according to the first credibility threshold and a fourth entity labeling result set from the second entity labeling result set according to the second credibility threshold; and
a final entity labeling result acquisition step: selecting from the third entity labeling result set and the fourth entity labeling result set according to the total number of training models to obtain a final entity labeling result.
2. The model fusion method of claim 1, wherein the labeling result set acquisition step comprises:
performing multiple rounds of model training on the model to obtain a plurality of training models and the probability vectors output by each training model's output layer, then averaging the probability vectors to obtain a plurality of average probability vectors; and
performing entity labeling on the average probability vectors by the Argmax method, then aggregating the labeling results to obtain the second entity labeling result set.
3. The model fusion method of claim 1, wherein the entity judgment step comprises:
checking the second entity labeling result set against the first entity labeling result set to obtain a judgment result; and
according to the judgment result, if a labeled entity in the second entity labeling result set appears in the first entity labeling result set, determining that entity to be the old entity, and selecting the old entity labeling result set corresponding to the old entity from the second entity labeling result set.
4. The model fusion method of claim 3, wherein the entity judgment step further comprises:
if a labeled entity in the second entity labeling result set does not appear in the first entity labeling result set, determining that entity to be the new entity, and selecting the new entity labeling result set corresponding to the new entity from the second entity labeling result set.
5. The model fusion method of claim 4, wherein the confidence level presetting step comprises:
presetting the first credibility threshold of the old entity and the second credibility threshold of the new entity;
and selecting, from the old entity labeling result set, the labeling results whose frequency of occurrence in the second entity labeling result set is greater than the first credibility threshold, to obtain the third entity labeling result set.
6. The model fusion method of claim 4, wherein the confidence level presetting step further comprises:
and selecting, from the new entity labeling result set, the labeling results whose frequency of occurrence in the second entity labeling result set is greater than the second credibility threshold, to obtain the fourth entity labeling result set.
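Claims 5 and 6 filter entities by how often they occur across the ensemble's labelings. The sketch below assumes "frequency of occurrence" means the fraction of training models that emitted the entity; the thresholds, entity names, and vote counts are invented for the example:

```python
from collections import Counter

def filter_by_threshold(all_labels, old_entities, t_old, t_new, n_models):
    """Keep an old entity if its occurrence frequency exceeds the first
    credibility threshold, and a new entity if it exceeds the second;
    returns the third and fourth entity labeling result sets."""
    counts = Counter(all_labels)  # entity -> number of models that emitted it
    third, fourth = set(), set()
    for ent, c in counts.items():
        freq = c / n_models
        if ent in old_entities:
            if freq > t_old:
                third.add(ent)
        elif freq > t_new:
            fourth.add(ent)
    return third, fourth

# Votes from a hypothetical 5-model ensemble.
labels = ["aspirin"] * 4 + ["newdrug"] * 3 + ["typo"]
third, fourth = filter_by_threshold(labels, {"aspirin"}, 0.5, 0.5, 5)
print(third, fourth)  # {'aspirin'} {'newdrug'}
```

In practice the second threshold would typically be set stricter than the first, since a never-before-seen entity needs more ensemble agreement to be trusted.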
7. The model fusion method of claim 1, wherein the step of obtaining the final entity labeling result comprises:
and selecting, from the third entity labeling result set and the fourth entity labeling result set, the labeling results whose occurrence counts are less than the total number of the training models and greater than half of the total number of the training models, to obtain the final entity labeling result.
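Taken literally, claim 7 keeps labeling results whose vote count lies strictly between half of and the full total of training models. A sketch of that window test, with invented vote counts:

```python
def select_final(vote_counts, n_models):
    """Keep entities whose vote count is greater than half of, and less
    than, the total number of training models, as claim 7 reads.  A common
    ensemble-voting variant simply requires a majority (count > n / 2)
    with no upper bound; the strict upper bound here follows the claim text."""
    return {e for e, c in vote_counts.items() if n_models / 2 < c < n_models}

votes = {"aspirin": 4, "paracetamol": 5, "typo": 2}
print(sorted(select_final(votes, 5)))  # ['aspirin']
```

Note that under the literal reading a unanimous label (5 of 5 votes) is excluded; whether the claim intends a closed upper bound is not resolvable from the translated text.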
8. A model fusion system, comprising:
a labeling result set acquisition unit: performing multiple rounds of training on a model to obtain probability vectors of a plurality of training models, processing the probability vectors to obtain average probability vectors, and performing entity labeling on the average probability vectors to obtain a second entity labeling result set;
an entity judgment unit: after distinguishing a new entity from an old entity according to a first entity labeling result set, selecting a new entity labeling result set and an old entity labeling result set corresponding to the new entity and the old entity from a second entity labeling result set;
a confidence level presetting unit: presetting a first credibility threshold and a second credibility threshold, selecting a third entity labeling result set from the second entity labeling result set according to the first credibility threshold, and then selecting a fourth entity labeling result set from the second entity labeling result set according to the second credibility threshold;
a final entity labeling result obtaining unit: and selecting the third entity labeling result set and the fourth entity labeling result set according to the total number of the training models to obtain a final entity labeling result.
9. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the model fusion method of any one of claims 1 to 7 when executing the computer program.
10. An electronic-device-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the model fusion method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111293396.XA CN114021568A (en) | 2021-11-03 | 2021-11-03 | Model fusion method, system, electronic device and medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114021568A (en) | 2022-02-08 |
Family
ID=80059999
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111293396.XA Pending CN114021568A (en) | 2021-11-03 | 2021-11-03 | Model fusion method, system, electronic device and medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114021568A (en) |
- 2021-11-03: CN application CN202111293396.XA filed; published as CN114021568A; legal status: active, Pending
Similar Documents
Publication | Title | Publication Date |
---|---|---|
WO2019051941A1 (en) | Method, apparatus and device for identifying vehicle type, and computer-readable storage medium | |
JP2019528502A (en) | Method and apparatus for optimizing a model applicable to pattern recognition and terminal device | |
EP3633553A1 (en) | Method, device and apparatus for training object detection model | |
CN111641621B (en) | Internet of things security event identification method and device and computer equipment | |
US11210502B2 (en) | Comparison method and apparatus based on a plurality of face image frames and electronic device | |
CN111027412B (en) | Human body key point identification method and device and electronic equipment | |
CN111968625A (en) | Sensitive audio recognition model training method and recognition method fusing text information | |
CN113780466A (en) | Model iterative optimization method and device, electronic equipment and readable storage medium | |
CN113608916A (en) | Fault diagnosis method and device, electronic equipment and storage medium | |
CN110032931B (en) | Method and device for generating countermeasure network training and removing reticulation and electronic equipment | |
CN113569705B (en) | Scene segmentation point judging method, system, storage medium and electronic equipment | |
CN111373436A (en) | Image processing method, terminal device and storage medium | |
CN117092525B (en) | Training method and device for battery thermal runaway early warning model and electronic equipment | |
CN113743277A (en) | Method, system, equipment and storage medium for short video frequency classification | |
CN113569704B (en) | Segmentation point judging method, system, storage medium and electronic equipment | |
CN114021568A (en) | Model fusion method, system, electronic device and medium | |
CN108364026A (en) | A kind of cluster heart update method, device and K-means clustering methods, device | |
CN114187502A (en) | Vehicle loading rate identification method and device, electronic equipment and storage medium | |
CN113569703B (en) | Real division point judging method, system, storage medium and electronic equipment | |
CN114237981A (en) | Data recovery method, device, equipment and storage medium | |
CN111708908B (en) | Video tag adding method and device, electronic equipment and computer readable storage medium | |
CN111694588B (en) | Engine upgrade detection method and device, computer equipment and readable storage medium | |
CN113987173A (en) | Short text classification method, system, electronic device and medium | |
CN113468879A (en) | Method, system, electronic device and medium for judging unknown words | |
CN112560970A (en) | Abnormal picture detection method, system, equipment and storage medium based on self-coding |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||