CN110826342A - Method, device, computer storage medium and terminal for realizing model management

Method, device, computer storage medium and terminal for realizing model management

Info

Publication number
CN110826342A
Authority
CN
China
Prior art keywords
model
file
text
predicted
predicting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201911038807.3A
Other languages
Chinese (zh)
Inventor
丁中正
袁灿
于政
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Mininglamp Software System Co ltd
Original Assignee
Beijing Mininglamp Software System Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Mininglamp Software System Co ltd filed Critical Beijing Mininglamp Software System Co ltd
Priority to CN201911038807.3A priority Critical patent/CN110826342A/en
Publication of CN110826342A publication Critical patent/CN110826342A/en
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10 File systems; File servers
    • G06F16/16 File or folder operations, e.g. details of user interfaces specifically adapted to file systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Machine Translation (AREA)

Abstract

A method, a device, a computer storage medium and a terminal for realizing model management. The method includes: storing each existing model file according to a preset strategy; selecting a model file for predicting the text to be predicted from the stored model files; and carrying out prediction processing on the text to be predicted by loading the selected model file. Through coordinated invocation of the model files, the embodiments of the present invention improve the efficiency with which natural language processing tasks are handled.

Description

Method, device, computer storage medium and terminal for realizing model management
Technical Field
The present disclosure relates to, but is not limited to, artificial intelligence technology, and more particularly to a method, an apparatus, a computer storage medium, and a terminal for implementing model management.
Background
Natural language processing is an important direction in the field of artificial intelligence; it studies how computers process human language. Unlike a general machine learning process, a large part of natural language processing is devoted to corpus preprocessing, including word segmentation, part-of-speech tagging, stop-word removal and the like. Because there are no explicit space marks between words in Chinese text, sentences appear as plain character strings, and the corpus exists in different forms such as short text or long text, preprocessing such as word segmentation and text labeling occupies a large proportion of natural language processing for Chinese text.
The tasks of natural language processing include basic tasks and high-level tasks. The basic tasks include word vectors, syntactic analysis, Chinese word segmentation, entity recognition, keyword extraction and the like; the high-level tasks are tasks directly oriented to business processing, including automatic summarization, text classification, text clustering, sentiment analysis and the like. Both basic tasks and high-level tasks generally need to be realized by a model based on a machine learning method, where the input of the model is a word vector converted from natural language text data and the output of the model is the target of the natural language task.
In the related art, the tasks of natural language processing are handled in isolation, that is, each natural language processing task needs to be implemented by its own task model, which results in low natural language processing efficiency. How to improve the efficiency with which natural language processing handles its tasks has therefore become a problem to be solved.
Disclosure of Invention
The following is a summary of the subject matter described in detail herein. This summary is not intended to limit the scope of the claims.
The embodiments of the present invention provide a method, a device, a computer storage medium and a terminal for realizing model management, which can improve the efficiency with which natural language processing tasks are handled.
The embodiment of the invention provides a method for realizing model management, which comprises the following steps:
storing each existing model file according to a preset strategy;
selecting a model file for predicting the text to be predicted from the stored model files;
and carrying out prediction processing on the text to be predicted by loading the selected model file.
In an exemplary embodiment, the selecting a model file for predicting a text to be predicted from stored model files includes:
determining whether the stored model files contain model files which can be used for predicting the text to be predicted or not according to the model related information of each model file;
selecting the determined model file which can be used for predicting the text to be predicted as a model file for predicting the text to be predicted;
wherein the model-related information includes information of one or any combination of the following: name, category, and description information.
In an exemplary embodiment, when it is determined that the stored model file does not include a model file that can be used for predicting the text to be predicted, the method further includes:
reading a preset training file;
combining the read training file with a pre-configured machine learning algorithm, and training to obtain a model file;
wherein the training file comprises: a training text for generating a model file that can be used for predicting the text to be predicted.
In an exemplary embodiment, the storing existing model files according to the preset policy includes:
setting corresponding file directories for the existing model files according to the types of the model files;
and storing each model file according to the file directory set for each model file.
On the other hand, an embodiment of the present invention further provides a device for implementing model management, including: a storage unit, a selection unit and a prediction unit; wherein:
the storage unit is used for: storing each existing model file according to a preset strategy;
the selection unit is used for: selecting a model file for predicting the text to be predicted from the stored model files;
the prediction unit is used for: carrying out prediction processing on the text to be predicted by loading the selected model file.
In an exemplary embodiment, the selecting unit is specifically configured to: determining whether the stored model files contain model files which can be used for predicting the text to be predicted or not according to the model related information of each model file;
selecting the determined model file which can be used for predicting the text to be predicted as a model file for predicting the text to be predicted;
wherein the model-related information includes information of one or any combination of the following: name, category, and description information.
In an exemplary embodiment, the apparatus further comprises a training unit for:
reading a preset training file; combining the read training file with a pre-configured machine learning algorithm, and training to obtain a model file;
wherein the training file comprises: a training text for generating a model file that can be used for predicting the text to be predicted.
In an exemplary embodiment, the storage unit is specifically configured to:
setting corresponding file directories for the existing model files according to the types of the model files;
and storing each model file according to the file directory set for each model file.
In still another aspect, an embodiment of the present invention further provides a computer storage medium, where a computer program is stored in the computer storage medium, and when the computer program is executed by a processor, the method for implementing model management is implemented.
In another aspect, an embodiment of the present invention further provides a terminal, including: a memory and a processor, the memory having a computer program stored therein; wherein:
the processor is configured to execute the computer program in the memory;
the computer program, when executed by the processor, implements a method of implementing model management as described above.
Compared with the related art, the technical solution of the present application includes: storing each existing model file according to a preset strategy; selecting a model file for predicting the text to be predicted from the stored model files; and carrying out prediction processing on the text to be predicted by loading the selected model file. Through coordinated invocation of the model files, the embodiments of the present invention improve the efficiency with which natural language processing tasks are handled.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification; they illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention without limiting it.
FIG. 1 is a flow chart of a method for implementing model management according to an embodiment of the present invention;
fig. 2 is a block diagram of an apparatus for implementing model management according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail below with reference to the accompanying drawings. It should be noted that the embodiments and features of the embodiments in the present application may be arbitrarily combined with each other without conflict.
The steps illustrated in the flow charts of the figures may be performed in a computer system such as a set of computer-executable instructions. Also, while a logical order is shown in the flow diagrams, in some cases, the steps shown or described may be performed in an order different than here.
Fig. 1 is a flowchart of a method for implementing model management according to an embodiment of the present invention, as shown in fig. 1, including:
Step 101: storing existing model files according to a preset strategy;
in an exemplary embodiment, the storing existing model files according to the preset policy includes:
setting corresponding file directories for the existing model files according to the types of the model files;
and storing each model file according to the file directory set for each model file.
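For illustration only, a minimal Python sketch of this category-based storage is shown below; the directory layout, file names and the helper function are assumptions made for the sketch, not details specified by the patent:

```python
import shutil
from pathlib import Path

# Hypothetical root directory of the model-file repository.
MODEL_ROOT = Path("models")

def store_model_file(model_path: str, category: str) -> Path:
    """Store an existing model file under the file directory set for its category."""
    target_dir = MODEL_ROOT / category              # e.g. models/named_entity_recognition/
    target_dir.mkdir(parents=True, exist_ok=True)   # create the category directory if needed
    target = target_dir / Path(model_path).name
    shutil.copy2(model_path, target)                # place the model file in its directory
    return target

# Example: store_model_file("ner_crf.model", "named_entity_recognition")
```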
Step 102: selecting a model file for predicting a text to be predicted from stored model files;
in an exemplary embodiment, the selecting a model file for predicting a text to be predicted from stored model files includes:
determining whether the stored model files contain model files which can be used for predicting the text to be predicted or not according to the model related information of each model file;
selecting the determined model file which can be used for predicting the text to be predicted as a model file for predicting the text to be predicted;
wherein the model-related information includes information of one or any combination of the following: name, category, and description information.
In an exemplary embodiment, before determining whether the stored model file contains a model file that can be used for predicting the text to be predicted, the method further includes:
and obtaining the model related information of each model file.
In an exemplary embodiment, the model-related information may further include information of one or any combination of the following: store path, version, and creation time.
The names of the model files are different from one another; the category of a model file may be used to identify the task target of the model file, such as syntactic analysis, Chinese word segmentation, text classification, and the like;
in an exemplary embodiment, embodiments of the present invention may record model-related information through metadata.
Step 103: carrying out prediction processing on the text to be predicted by loading the selected model file.
In an exemplary embodiment, when it is determined that the stored model file does not include a model file that can be used for predicting the text to be predicted, the method further includes:
reading a preset training file;
combining the read training file with a pre-configured machine learning algorithm, and training to obtain a model file;
wherein the training file comprises: a training text for generating a model file that can be used for predicting the text to be predicted.
It should be noted that, when training a model file, which machine learning algorithm is pre-configured may be selected by a person skilled in the art according to the function of the model file that needs to be generated; further, after a model file is generated from a training file, the generated model file becomes an existing model file.
In an exemplary embodiment, model files may be added or deleted according to the task requirements of natural language processing.
In an exemplary embodiment, an interaction module may be provided for uploading the training files and the model files, and a setting and editing module may be provided for adding and deleting model files;
the generation of the model file and the prediction processing process of the text to be predicted in the embodiment of the invention can comprise the existing processing process of the related technology; when generating a model file and performing prediction processing, if text labeling is needed, labeling a training file contained in the training file when generating the model file, including but not limited to: part-of-speech tagging, sequence tagging, and the like. Taking named entity recognition as an example, the training text needs to be labeled with a sequence in advance to form a training data set. When prediction processing is performed, labeling is performed on the text to be predicted, including but not limited to: part-of-speech tagging, sequence tagging, and the like.
In the embodiment of the invention, the process of generating the model file comprises the following steps:
preprocessing the training text, wherein the preprocessing may include one or any combination of the following: stop-word removal, Chinese word segmentation and data format conversion;
converting the preprocessed training text into word vectors;
and inputting the training text expressed by the word vector into a corresponding model trainer, and training by combining a pre-configured machine learning algorithm to obtain a model file. Taking named entity recognition as an example, the training text is preprocessed by Chinese word segmentation, format conversion and the like, and then the preprocessed training text is converted into word vectors. The training process is to take the word vector as the input of the model trainer, take the labeling result as the model output, continuously update and iterate by using the corresponding machine learning algorithm, terminate the training when the iteration condition is reached, and store the trained model file into the file directory classified as named entity identification.
The prediction processing of the text to be predicted includes the following steps:
preprocessing the text to be predicted, wherein the preprocessing may include one or any combination of the following: stop-word removal, Chinese word segmentation and data format conversion;
and performing word vector conversion on the preprocessed predicted text.
reading the corresponding model file according to the current task, and obtaining the trained model from the read model file;
and inputting the text to be predicted, expressed as word vectors, into the trained model to obtain a prediction result.
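Correspondingly, a minimal prediction sketch, assuming the pickled scikit-learn model produced by the training sketch above:

```python
import pickle
from pathlib import Path

def predict_text(text_to_predict: str, model_path: str) -> str:
    """Load the selected model file and run prediction on the text to be predicted."""
    with Path(model_path).open("rb") as f:
        model = pickle.load(f)                       # the trained model read from the model file
    # Preprocessing (stop-word removal, Chinese word segmentation, data format
    # conversion) would be applied here before the text reaches the model.
    return model.predict([text_to_predict])[0]       # the prediction result

# Example: predict_text("待预测的文本", "models/text_classification/news_classifier.pkl")
```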
Compared with the related art, the technical solution of the present application includes: storing each existing model file according to a preset strategy; selecting a model file for predicting the text to be predicted from the stored model files; and carrying out prediction processing on the text to be predicted by loading the selected model file. Through coordinated invocation of the model files, the embodiments of the present invention improve the efficiency with which natural language processing tasks are handled.
Fig. 2 is a block diagram of an apparatus for implementing model management according to an embodiment of the present invention, as shown in fig. 2, including: a storage unit, a selection unit and a prediction unit; wherein:
the storage unit is used for: storing each existing model file according to a preset strategy;
in an exemplary embodiment, the storage unit is specifically configured to:
setting corresponding file directories for the existing model files according to the types of the model files;
and storing each model file according to the file directory set for each model file.
The selection unit is used for: selecting a model file for predicting the text to be predicted from the stored model files;
in an exemplary embodiment, the selecting unit is specifically configured to: determining whether the stored model files contain model files which can be used for predicting the text to be predicted or not according to the model related information of each model file;
selecting the determined model file which can be used for predicting the text to be predicted as a model file for predicting the text to be predicted;
wherein the model-related information includes information of one or any combination of the following: name, category, and description information.
In an exemplary embodiment, the apparatus further includes an obtaining unit, which is used for:
and obtaining the model related information of each model file.
In an exemplary embodiment, the model-related information may further include information of one or any combination of the following: store path, version, and creation time.
The prediction unit is used for: carrying out prediction processing on the text to be predicted by loading the selected model file.
In an exemplary embodiment, the apparatus further comprises a training unit for:
reading a preset training file; combining the read training file with a pre-configured machine learning algorithm, and training to obtain a model file;
wherein the training file comprises: a training text for generating a model file that can be used for predicting the text to be predicted.
After a model file is generated from a training file, the generated model file becomes an existing model file.
Compared with the related art, the technical solution of the present application includes: storing each existing model file according to a preset strategy; selecting a model file for predicting the text to be predicted from the stored model files; and carrying out prediction processing on the text to be predicted by loading the selected model file. Through coordinated invocation of the model files, the embodiments of the present invention improve the efficiency with which natural language processing tasks are handled.
The embodiment of the invention also provides a computer storage medium, wherein a computer program is stored in the computer storage medium, and when being executed by a processor, the computer program realizes the method for realizing the model management.
An embodiment of the present invention further provides a terminal, including: a memory and a processor, the memory having a computer program stored therein; wherein:
the processor is configured to execute the computer program in the memory;
the computer program, when executed by the processor, implements a method of implementing model management as described above.
"one of ordinary skill in the art will appreciate that all or some of the steps of the methods, systems, functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware implementation, the division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components in cooperation. Some or all of the components may be implemented as software executed by a processor, such as a digital signal processor or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). The term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, as is well known to those of ordinary skill in the art. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can accessed by a computer. In addition, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media as known to those skilled in the art.

Claims (10)

1. A method of implementing model management, comprising:
storing each existing model file according to a preset strategy;
selecting a model file for predicting the text to be predicted from the stored model files;
and carrying out prediction processing on the text to be predicted by loading the selected model file.
2. The method of claim 1, wherein selecting the model file from the stored model files for predicting the text to be predicted comprises:
determining whether the stored model files contain model files which can be used for predicting the text to be predicted or not according to the model related information of each model file;
selecting the determined model file which can be used for predicting the text to be predicted as a model file for predicting the text to be predicted;
wherein the model-related information includes information of one or any combination of the following: name, category, and description information.
3. The method according to claim 2, wherein when it is determined that the stored model files do not contain model files that can be used for predicting the text to be predicted, the method further comprises:
reading a preset training file;
combining the read training file with a pre-configured machine learning algorithm, and training to obtain a model file;
wherein the training file comprises: a training text for generating a model file that can be used for predicting the text to be predicted.
4. The method according to any one of claims 1 to 3, wherein the storing existing model files according to a preset policy comprises:
setting corresponding file directories for the existing model files according to the types of the model files;
and storing each model file according to the file directory set for each model file.
5. An apparatus for implementing model management, comprising: a storage unit, a selection unit and a prediction unit; wherein:
the storage unit is used for: storing each existing model file according to a preset strategy;
the selection unit is used for: selecting a model file for predicting the text to be predicted from the stored model files;
the prediction unit is used for: carrying out prediction processing on the text to be predicted by loading the selected model file.
6. The apparatus according to claim 5, wherein the selection unit is specifically configured to: determining whether the stored model files contain model files which can be used for predicting the text to be predicted or not according to the model related information of each model file;
selecting the determined model file which can be used for predicting the text to be predicted as a model file for predicting the text to be predicted;
wherein the model-related information includes information of one or any combination of the following: name, category, and description information.
7. The apparatus of claim 6, further comprising a training unit to:
reading a preset training file; combining the read training file with a pre-configured machine learning algorithm, and training to obtain a model file;
wherein the training file comprises: a training text for generating a model file that can be used for predicting the text to be predicted.
8. The device according to any one of claims 5 to 7, wherein the storage unit is specifically configured to:
setting corresponding file directories for the existing model files according to the types of the model files;
and storing each model file according to the file directory set for each model file.
9. A computer storage medium having stored thereon a computer program which, when executed by a processor, implements a method of implementing model management as claimed in any one of claims 1 to 4.
10. A terminal, comprising: a memory and a processor, the memory having a computer program stored therein;
wherein:
the processor is configured to execute the computer program in the memory;
the computer program, when executed by the processor, implements a method of implementing model management as recited in any of claims 1-4.
CN201911038807.3A 2019-10-29 2019-10-29 Method, device, computer storage medium and terminal for realizing model management Withdrawn CN110826342A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911038807.3A CN110826342A (en) 2019-10-29 2019-10-29 Method, device, computer storage medium and terminal for realizing model management

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911038807.3A CN110826342A (en) 2019-10-29 2019-10-29 Method, device, computer storage medium and terminal for realizing model management

Publications (1)

Publication Number Publication Date
CN110826342A true CN110826342A (en) 2020-02-21

Family

ID=69550967

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911038807.3A Withdrawn CN110826342A (en) 2019-10-29 2019-10-29 Method, device, computer storage medium and terminal for realizing model management

Country Status (1)

Country Link
CN (1) CN110826342A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20120040851A (en) * 2010-10-20 2012-04-30 인하대학교 산학협력단 System and method of simulation-based workload prediction model selection for load balancing scehduling on grid biometric authentication system
CN106156186A (en) * 2015-04-21 2016-11-23 阿里巴巴集团控股有限公司 A kind of data model managing device, server and data processing method
CN110322093A (en) * 2018-03-30 2019-10-11 阿里巴巴集团控股有限公司 Information processing method, information display method, device and calculating equipment
CN109165249A (en) * 2018-08-07 2019-01-08 阿里巴巴集团控股有限公司 Data processing model construction method, device, server and user terminal

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112799804A (en) * 2021-01-15 2021-05-14 北京明略软件系统有限公司 Task management method and system
CN113377715A (en) * 2021-06-24 2021-09-10 北京明朝万达科技股份有限公司 Method and device for intelligently issuing classification model based on nlp and storage medium
CN113407347A (en) * 2021-06-30 2021-09-17 北京百度网讯科技有限公司 Resource scheduling method, device, equipment and computer storage medium
CN113407347B (en) * 2021-06-30 2023-02-24 北京百度网讯科技有限公司 Resource scheduling method, device, equipment and computer storage medium
CN113515895A (en) * 2021-07-30 2021-10-19 北京中网易企秀科技有限公司 Cross-platform model prediction method and device
CN113515895B (en) * 2021-07-30 2024-03-01 北京中网易企秀科技有限公司 Cross-platform model prediction method and device

Similar Documents

Publication Publication Date Title
CN110826342A (en) Method, device, computer storage medium and terminal for realizing model management
CN106874248B (en) Article generation method and device based on artificial intelligence
US20200167420A1 (en) Self-learning user interface with image-processed qa-pair corpus
JP2013541793A (en) Multi-mode search query input method
US20210158093A1 (en) Automatically generating labeled synthetic documents
US20170116521A1 (en) Tag processing method and device
CN110909120B (en) Resume searching/delivering method, device and system and electronic equipment
CN107948730B (en) Method, device and equipment for generating video based on picture and storage medium
CN110705235B (en) Information input method and device for business handling, storage medium and electronic equipment
CN106648442A (en) Metadata node internal memory mirroring method and device
CN112149419B (en) Method, device and system for normalized automatic naming of fields
CN111414735A (en) Text data generation method and device
CN111694520B (en) Method and device for optimizing big data storage
WO2021055868A1 (en) Associating user-provided content items to interest nodes
CN117290481A (en) Question and answer method and device based on deep learning, storage medium and electronic equipment
CN111897828A (en) Data batch processing implementation method, device, equipment and storage medium
CN112818687B (en) Method, device, electronic equipment and storage medium for constructing title recognition model
CN115062106A (en) Code searching method, system and medium based on function multiple graph embedding
CN111401032B (en) Text processing method, device, computer equipment and storage medium
CN110276001B (en) Checking page identification method and device, computing equipment and medium
CN111581270A (en) Data extraction method and device
CN111475641A (en) Data extraction method and device, storage medium and equipment
CN111401005A (en) Text conversion method and device and readable storage medium
CN117057325B (en) Form filling method and system applied to power grid field and electronic equipment
CN113298914B (en) Knowledge chunk extraction method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20200221)