CN111967613B - NLP model training and publishing recognition system - Google Patents


Info

Publication number
CN111967613B
CN111967613B (application CN202010853842.7A)
Authority
CN
China
Prior art keywords: nlp, service process, training, release, service
Prior art date
Legal status
Active
Application number
CN202010853842.7A
Other languages
Chinese (zh)
Other versions
CN111967613A (en)
Inventor
陈继扬
王磊
Current Assignee
Zhejiang Baiying Technology Co Ltd
Original Assignee
Zhejiang Baiying Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Baiying Technology Co Ltd filed Critical Zhejiang Baiying Technology Co Ltd
Priority to CN202010853842.7A
Publication of CN111967613A
Application granted
Publication of CN111967613B


Classifications

    • G06N 20/00: Machine learning
    • G06F 9/4806: Task transfer initiation or dispatching
    • G06F 9/5011: Allocation of resources to service a request, where the resources are hardware resources other than CPUs, servers, and terminals
    • G06F 9/5022: Mechanisms to release resources
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses an NLP model training and publishing recognition system, comprising: at least two GPU servers; an NLP recognition model; an NLP language recognition service process; at least one NLP gateway; at least two GPU server resource scheduling instruction executors, which execute scheduling instructions initiated by a resource scheduling center module; the resource scheduling center module, which allocates GPU server resources and coordinates the executors in the following processes: training the NLP recognition model, publishing the NLP recognition model, and synchronously updating the relationship between the NLP recognition model and business data; and a service registry module, which records service information in the registry when an NLP language recognition service process starts and deletes it when the process stops, enabling the NLP gateway to discover service processes automatically.

Description

NLP model training and publishing recognition system
Technical Field
The invention relates to the field of model training, and in particular to an NLP model training and publishing recognition system.
Background
NLP (Natural Language Processing) is a subfield of Artificial Intelligence (AI). A conventional NLP model must currently be trained, published, and started before it can provide normal interface service, and in industry practice all three steps are performed manually on a server by developers. Every release or training run of an NLP language recognition model therefore carries a risk of misoperation, and human error easily causes faults.
In addition, after an NLP language recognition service process starts, manual configuration is still required: for example, exposing a service call interface requires configuring the nginx service route and its relationship to the NLP recognition model and the service process. This procedure is tedious, and such mechanically repetitive operation-and-maintenance work is ill-suited to daily development.
Disclosure of Invention
The technical problem solved by the invention is to provide an NLP recognition model training and publishing recognition system that simplifies NLP language recognition service configuration, improves NLP recognition model training efficiency, reduces the risk of incorrect or missing configuration, and provides a stable NLP language recognition service to service consumers.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
the invention provides an NLP model training and publishing recognition system, which comprises:
at least two GPU servers, wherein the GPU servers are used for machine learning calculation;
the NLP recognition model is a model file for starting NLP language identification service, which is obtained after algorithm fitting is carried out on the business text corpus;
the NLP language identification service process is a server system process for providing NLP language identification service outwards based on the NLP identification model file after the business text corpus training is completed;
the NLP gateway is used for routing and forwarding a request called by a service party to a server where an NLP speech awareness service process which is successfully started and is providing outside is located according to a preset rule, and a corresponding NLP speech awareness identification service provides an identification service;
the system comprises at least two GPU server resource scheduling instruction executors, wherein the GPU server resource scheduling instruction executors are used for executing scheduling instructions initiated by a resource scheduling center module;
the resource scheduling center module is used for distributing GPU server resources and coordinating executors to execute instructions in the following processes, and comprises the following steps:
training an NLP recognition model, publishing the NLP recognition model and synchronously changing the relationship between the NLP recognition model and service data;
and the service registry module is used for recording and deleting the service information from the registry when the NLP language identification service process is started and stopped and is used for the NLP gateway to automatically discover the service process.
In the above scheme, the NLP gateway dynamically discovers, through the service registry module, the NLP language recognition service processes that can provide services externally.
In the above scheme, the NLP recognition service process depends on the model file produced once NLP recognition model training is complete.
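The registry/gateway interaction just described can be sketched as follows. This is a minimal illustrative model, not the patent's implementation; the class names, the (industry_id, scene_id) key, and the first-endpoint routing rule are all assumptions.

```python
class ServiceRegistry:
    """Records service endpoints on process start, removes them on stop."""

    def __init__(self):
        self._services = {}  # (industry_id, scene_id) -> set of (ip, port)

    def register(self, industry_id, scene_id, ip, port):
        self._services.setdefault((industry_id, scene_id), set()).add((ip, port))

    def deregister(self, industry_id, scene_id, ip, port):
        self._services.get((industry_id, scene_id), set()).discard((ip, port))

    def discover(self, industry_id, scene_id):
        return sorted(self._services.get((industry_id, scene_id), set()))


class NlpGateway:
    """Routes a caller's request to a live NLP recognition service process."""

    def __init__(self, registry):
        self.registry = registry

    def route(self, industry_id, scene_id):
        # Preset rule assumed here: pick the first live endpoint; a real
        # gateway would likely load-balance across all of them.
        endpoints = self.registry.discover(industry_id, scene_id)
        if not endpoints:
            raise RuntimeError("no NLP language recognition service available")
        return endpoints[0]
```

Because the gateway reads the registry on every call, a process that deregisters on shutdown immediately stops receiving traffic, which is the behavior the scheme relies on.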
In the above scheme, the resource scheduling center module is configured to generate the following operation instructions and arrangement modes during training of the NLP recognition model, publishing of the NLP recognition model, and synchronous updating of the relationship between the NLP recognition model and business data:
training instructions, service start instructions, service shutdown instructions, and relationship synchronization instructions;
a training arrangement mode, a model release arrangement mode, a model shutdown arrangement mode, and a relationship synchronization mode;
the training instruction is generated in the training arrangement mode: after the resource scheduling center module allocates GPU servers for NLP recognition model training, the corresponding GPU server resource scheduling instruction executor is invoked for model training;
the service start instruction and service shutdown instruction are used in the model release arrangement mode: the resource scheduling center module generates, per scheduling request, a process-level service start or service shutdown instruction on a GPU server and starts or stops the NLP language recognition service process on that specific GPU server; the service shutdown instruction is also used in the model shutdown arrangement mode, to stop a serving process on a GPU server;
the relationship synchronization instruction is generated in the relationship synchronization mode and is used to shut down the corresponding NLP language recognition service process.
In the above scheme, shutting down an NLP language recognition service process by the resource scheduling center module includes:
generating an NLP language recognition service process shutdown scheduling record, with its state set to created;
querying the corresponding NLP language recognition service process, its state, and its business-domain industry id according to the ip and port of the process to be shut down;
when the NLP language recognition service process is in the running state, preempting the GPU server resource operation lock corresponding to the process's ip and port;
removing the correspondingly registered NLP language recognition service process from the registry according to the queried industry id, ip, and port of the process;
invoking a GPU server resource scheduling instruction executor to execute the operation of closing the NLP language recognition service process;
when the call succeeds, releasing the GPU server resource operation lock, updating the shutdown scheduling record state to success, and marking the NLP service process resource as idle.
In the above scheme, shutting down the NLP language recognition service process by the resource scheduling center module further comprises:
when the call to the GPU server resource scheduling instruction executor fails, releasing the GPU server resource operation lock, updating the shutdown scheduling record state to failure, and recording the failure reason.
In the above scheme, the system is further used for NLP service process capacity expansion release, which includes:
acquiring the URL address, industry id, and scene id of the trained NLP recognition model, together with the capacity expansion release quantity;
generating capacity expansion scheduling records corresponding to the expansion quantity according to the industry id and scene id, and setting each record's state to created;
querying and preempting the GPU server resource operation locks corresponding to the ip and port of idle NLP language recognition service processes;
invoking a GPU server resource scheduling instruction executor to perform the release operation;
updating the NLP language recognition service process scheduling state to success and running;
registering the successfully scheduled NLP language recognition service process to the registry and opening it to traffic;
and marking the NLP language recognition service process resource as successfully scheduled, and releasing the GPU server resource operation lock.
In the above scheme, the NLP service process capacity expansion release further comprises the following step:
when the number of idle NLP language recognition service processes is 0, updating the capacity expansion scheduling record state to expansion release failure and marking the cause as insufficient resources.
In the above scheme, the NLP service process capacity expansion release further comprises the following step:
before preempting the GPU server resource operation locks corresponding to the ip and port of idle NLP language recognition service processes, querying whether the number of NLP service processes has already been expanded to a preset threshold; if so, ending the expansion release of the NLP language recognition service process.
In the above scheme, the NLP service process capacity expansion release further comprises the following step:
if invoking the GPU server resource scheduling instruction executor to perform the release operation fails, updating the capacity expansion scheduling record state to expansion release failure and marking the cause as executor execution failure.
In the above scheme, the system is further used for NLP service process rolling release, which includes:
generating a rolling scheduling record for the NLP language recognition service process rolling release, and setting its state to created;
querying and preempting idle NLP language recognition service process resources;
updating the successfully preempted idle NLP language recognition service process resources to the locked state;
invoking a GPU server resource scheduling instruction executor to execute the NLP language recognition service process expansion release;
updating the expansion scheduling record state to success, performing rolling release scheduling for the NLP language recognition service processes under the industry id and scene id corresponding to the rolling release, and generating a rolling release scheduling record;
and updating the NLP language recognition service processes by rolling release until the to-be-rolled list has been traversed.
In the above scheme, the NLP service process rolling release further comprises the following steps: after the rolling release has traversed the NLP language recognition service processes, locking the server process resources, deregistering the corresponding services, and invoking a GPU server resource scheduling instruction executor to shut down the corresponding services.
In the above scheme, synchronization of the NLP language recognition service process relationships by the resource scheduling center module includes:
acquiring the relationship file URL address, industry id, and scene id of the trained NLP recognition model;
querying the running NLP language recognition service processes according to the industry id and scene id;
invoking a GPU server resource scheduling instruction executor to execute the relationship synchronization;
and logging the synchronization status of the relationship between the NLP recognition model and the business scene.
In the above scheme, training of the NLP recognition model by the resource scheduling center module includes:
acquiring the NLP recognition model training corpus address, the upload URL address for the trained NLP recognition model, and the industry id and scene id associated with the model;
generating a training scheduling record and setting it to the created state;
querying and locking server resources available for training;
updating the locked server resources used for training to the locked state;
generating a training record, and marking the training scheduling record state as scheduling;
invoking a GPU server resource scheduling instruction executor to execute the training instruction, and updating the training scheduling record to training;
waiting for the GPU server resource scheduling instruction executor to finish executing the training instruction and call back the training result, then releasing the locked server resources;
and obtaining the trained NLP recognition model.
In the above scheme, training of the NLP recognition model by the resource scheduling center module further includes:
when invoking the GPU server resource scheduling instruction executor to execute the training instruction fails, releasing the locked server resources and marking the training scheduling record as failed.
The beneficial effects of the invention are as follows:
the invention provides a model training and publishing recognition system that, by simplifying NLP service configuration, effectively reduces the frequency of manual intervention, lowers the failure rate caused by human operation, and improves publishing/training efficiency; it can also discover the NLP recognition model whose training has completed and provide NLP interface service externally.
Drawings
FIG. 1 is a schematic diagram of a model training and publishing recognition system according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of shutting down an NLP language recognition service process by the resource scheduling center module according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of NLP service process capacity expansion release in an example of the model training and publishing recognition system provided by an embodiment of the present invention;
FIG. 4 is a schematic flow chart of NLP service process rolling release in an example of the model training and publishing recognition system provided by an embodiment of the present invention;
FIG. 5 is a schematic flow chart of synchronizing NLP language recognition service process relationships by the resource scheduling center module in an example provided by an embodiment of the present invention;
FIG. 6 is a schematic flow chart of training an NLP recognition model by the resource scheduling center module in an example provided by an embodiment of the present invention.
Detailed Description
The technical solution of the present invention will be further described below through specific embodiments and with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without inventive effort fall within the scope of the present invention.
The embodiment of the invention provides a model training release identification system, which can simplify NLP service configuration, improve release and training efficiency of an NLP identification model and provide NLP interface service to the outside.
The following describes in detail the technical solutions provided by the embodiments of the present invention with reference to the accompanying drawings.
An embodiment of the present invention provides a model training release recognition system, as shown in fig. 1, where the system includes:
at least two GPU servers 10, the GPU servers 10 being used for machine learning computations;
the NLP recognition model 20, which is a model file for starting the NLP language recognition service, obtained by algorithm fitting on the business text corpus;
an NLP language recognition service process 30, which is a server system process providing the NLP language recognition service externally based on the NLP recognition model file after business text corpus training is complete;
at least one NLP gateway 40, where the NLP gateway 40 is configured to route and forward a request from a calling service party, according to a preset rule, to the server hosting a successfully started and externally available NLP language recognition service process 30, where the corresponding NLP language recognition service performs recognition;
at least two GPU server resource scheduling instruction executors 50, where the GPU server resource scheduling instruction executors 50 are configured to execute the scheduling instruction initiated by the resource scheduling center module 60;
at least one resource scheduling center module 60, where the resource scheduling center module 60 is configured to allocate resources of the GPU servers 10 and to coordinate the executors in the following processes:
training of the NLP recognition model 20, publishing of the NLP recognition model 20, and synchronous updating of the relationship between the NLP recognition model 20 and business data;
the resource scheduling center module 60 is configured to generate the following operation instructions and arrangement modes during training of the NLP recognition model 20, publishing of the NLP recognition model 20, and synchronous updating of the relationship between the NLP recognition model 20 and business data:
training instructions, service start instructions, service shutdown instructions, and relationship synchronization instructions;
a training arrangement mode, a model release arrangement mode, a model shutdown arrangement mode, and a relationship synchronization mode;
the training instructions are generated in the training arrangement mode: after the resource scheduling center module 60 allocates a GPU server 10 for training the NLP recognition model 20, the corresponding GPU server resource scheduling instruction executor 50 is invoked for training;
the service start instruction and service shutdown instruction are used in the model release arrangement mode: the resource scheduling center module 60 generates, per scheduling request, a process-level service start or service shutdown instruction on a GPU server 10 and starts or stops the NLP language recognition service process 30 on that specific GPU server 10; the service shutdown instruction is also used in the model shutdown arrangement mode, to stop a serving process on the GPU server 10;
the relationship synchronization instruction is generated in the relationship synchronization mode and is used to shut down the corresponding NLP language recognition service process 30;
at least one service registry module 70, where the service registry module 70 is used for recording service information in the registry when an NLP language recognition service process 30 starts and deleting it when the process stops, and for the NLP gateway 40 to discover service processes.
In one example, the NLP gateway 40 dynamically discovers, through the service registry module 70, the NLP language recognition service processes 30 that can already provide services externally.
In one example, the NLP recognition service process 30 relies on the model file produced once training of the NLP recognition model 20 is complete.
In one example, as shown in fig. 2, shutting down an NLP language recognition service process by the resource scheduling center module includes:
S201, generating an NLP language recognition service process shutdown scheduling record, with its state set to created;
S202, querying the corresponding NLP language recognition service process, its state, and its business-domain industry id according to the ip and port of the process to be shut down;
S203, when the NLP language recognition service process is in the running state, preempting the GPU server resource operation lock corresponding to the process's ip and port;
S204, removing the correspondingly registered NLP language recognition service process from the registry according to the queried industry id, ip, and port of the process;
S205, invoking a GPU server resource scheduling instruction executor to execute the operation of closing the NLP language recognition service process;
for step S205, when the call to the GPU server resource scheduling instruction executor fails, the GPU server resource operation lock is released, the shutdown scheduling record state is updated to failure, and a failure reason is recorded.
S206, when the call succeeds, releasing the GPU server resource operation lock, updating the shutdown scheduling record state to success, and marking the NLP service process resource as idle.
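A minimal sketch of the shutdown flow S201 to S206, assuming in-memory stand-ins for the scheduling record (a dict), the resource operation lock (a flag on the process), the registry (a set of endpoints), and the executor (any callable); all names are hypothetical.

```python
def shutdown_service(process, registry, executor):
    """Shut down one NLP language recognition service process."""
    record = {"state": "created"}                       # S201: scheduling record
    if process["state"] != "running":                   # S202: query process state
        record["state"] = "skipped"
        return record
    process["lock"] = True                              # S203: preempt resource lock
    registry.discard((process["ip"], process["port"]))  # S204: deregister first,
                                                        # so no new traffic arrives
    try:
        executor(process)                               # S205: executor closes process
    except Exception as exc:
        process["lock"] = False                         # failure branch: release lock,
        record.update(state="failed", reason=str(exc))  # record state and reason
        return record
    process["lock"] = False                             # S206: release lock,
    process["state"] = "idle"                           # mark resource idle
    record["state"] = "success"
    return record
```

Deregistering (S204) before the executor call (S205) matters: it drains traffic away from the process before it is killed, which is why the scheme orders the steps this way.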
In one example, as shown in fig. 3, the system is further used for NLP service process capacity expansion release, which includes:
S301, acquiring the URL address, industry id, and scene id of the trained NLP recognition model, together with the capacity expansion release quantity;
S302, generating capacity expansion scheduling records corresponding to the expansion quantity according to the industry id and scene id, and setting each record's state to created;
S303, querying and preempting the GPU server resource operation locks corresponding to the ip and port of idle NLP language recognition service processes;
for step S303, when the number of idle NLP language recognition service processes is 0, the capacity expansion scheduling record state is updated to expansion release failure and the cause is marked as insufficient resources.
S304, invoking a GPU server resource scheduling instruction executor to perform the release operation;
for step S304, if invoking the GPU server resource scheduling instruction executor to perform the release operation fails, the capacity expansion scheduling record state is updated to expansion release failure and the cause is marked as executor execution failure.
S305, updating the NLP language recognition service process scheduling state to success and running;
S306, registering the successfully scheduled NLP language recognition service process to the registry and opening it to traffic;
S307, marking the NLP language recognition service process resource as successfully scheduled, and releasing the GPU server resource operation lock.
In one example, before the GPU server resource operation locks corresponding to the ip and port of idle NLP language recognition service processes are preempted, the system queries whether the number of NLP service processes has already been expanded to a preset threshold; if so, the expansion release of the NLP language recognition service process ends.
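The expansion release flow S301 to S307, including its failure branches (no idle processes, preset threshold reached, executor failure), can be sketched as follows; the data structures, the per-instance record list, and the threshold default are illustrative assumptions.

```python
def expand_release(model_url, count, idle, running, registry, executor,
                   threshold=10):
    """Release up to `count` new service instances of the model at model_url."""
    records = []
    for _ in range(count):                  # S302: one scheduling record per instance
        if len(running) >= threshold:       # preset-threshold check: stop expanding
            break
        if not idle:                        # S303 failure branch: no idle process
            records.append({"state": "failed",
                            "reason": "insufficient resources"})
            continue
        proc = idle.pop()                   # S303: preempt an idle process resource
        try:
            executor(proc, model_url)       # S304: executor performs the release
        except Exception:
            records.append({"state": "failed",
                            "reason": "executor failed"})
            idle.append(proc)               # return the resource to the idle pool
            continue
        running.append(proc)                # S305: scheduling state success/running
        registry.add(proc)                  # S306: register and open to traffic
        records.append({"state": "success"})  # S307: mark success (lock released)
    return records
```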
In one example, as shown in fig. 4, the system is also used for NLP service process rolling release, which includes:
S401, generating a rolling scheduling record for the NLP language recognition service process rolling release, and setting its state to created;
S402, querying and preempting idle NLP language recognition service process resources;
S403, updating the successfully preempted idle NLP language recognition service process resources to the locked state;
S404, invoking a GPU server resource scheduling instruction executor to execute the NLP language recognition service process expansion release;
S405, updating the expansion scheduling record state to success, performing rolling release scheduling for the NLP language recognition service processes under the industry id and scene id corresponding to the rolling release, and generating a rolling release scheduling record;
S406, updating the NLP language recognition service processes by rolling release until the to-be-rolled list has been traversed.
For step S406, after the rolling release has traversed the NLP language recognition service processes, the server process resources are locked, the corresponding services are deregistered, and a GPU server resource scheduling instruction executor is invoked to shut down the corresponding services.
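The rolling release S401 to S406 can be sketched as a start-new-then-stop-old loop over the to-be-rolled list, so that an updated process is serving before its predecessor is shut down. The function signature and the choice to return the old resource to the idle pool are assumptions for illustration.

```python
def rolling_release(to_roll, idle, registry, release_exec, shutdown_exec):
    """Replace each process in to_roll with a freshly released one."""
    rolled = []
    for old in list(to_roll):      # S406: traverse the to-be-rolled list
        if not idle:
            break                  # no idle resource left to roll onto
        new = idle.pop()           # S402/S403: preempt and lock an idle resource
        release_exec(new)          # S404: executor releases the new process
        registry.add(new)          # new process registered, takes traffic
        registry.discard(old)      # old process deregistered first...
        shutdown_exec(old)         # ...then shut down by the executor
        idle.append(old)           # old resource returns to the idle pool
        rolled.append((old, new))
    return rolled
```

Because the new process registers before the old one is removed, the service stays continuously reachable during the roll, which is the point of a rolling release.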
In one example, as shown in fig. 5, synchronization of the NLP language recognition service process relationships by the resource scheduling center module includes:
S501, acquiring the relationship file URL address, industry id, and scene id of the trained NLP recognition model;
S502, querying the running NLP language recognition service processes according to the industry id and scene id;
S503, invoking a GPU server resource scheduling instruction executor to execute the relationship synchronization;
S504, logging the synchronization status of the relationship between the NLP recognition model and the business scene.
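The relationship synchronization S501 to S504 can be sketched as a filtered fan-out of the relation file to the running processes for the given industry and scene; the field names and log format are illustrative assumptions.

```python
def sync_relations(relation_url, industry_id, scene_id, processes, executor, log):
    """Push the model/business relation file to every matching running process."""
    targets = [p for p in processes                 # S502: query by industry id,
               if p["industry_id"] == industry_id   # scene id, and running state
               and p["scene_id"] == scene_id
               and p["state"] == "running"]
    for proc in targets:
        executor(proc, relation_url)                # S503: executor syncs relations
        log.append(f"synced {relation_url} to "     # S504: record sync status
                   f"{proc['ip']}:{proc['port']}")
    return len(targets)
```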
In one example, as shown in fig. 6, training of the NLP recognition model by the resource scheduling center module includes:
S601, acquiring the NLP recognition model training corpus address, the upload URL address for the trained NLP recognition model, and the industry id and scene id associated with the model;
S602, generating a training scheduling record and setting it to the created state;
S603, querying and locking server resources available for training;
S604, updating the locked server resources used for training to the locked state;
S605, generating a training record, and marking the training scheduling record state as scheduling;
S606, invoking a GPU server resource scheduling instruction executor to execute the training instruction, and updating the training scheduling record to training;
for step S606, when the execution of the training instruction by the GPU server resource scheduling instruction executor fails, the locked server resources are released and the training scheduling record is marked as failed.
S607, waiting for the GPU server resource scheduling instruction executor to finish executing the training instruction and call back the training result, then releasing the locked server resources;
S608, obtaining the trained NLP recognition model.
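The training flow S601 to S608 can be sketched as follows, modeling the executor as a callable that returns the trained model's upload URL and raises on failure; all structures and field names are simplified assumptions.

```python
def train_model(corpus_url, upload_url, industry_id, scene_id,
                servers, executor):
    """Schedule one training run; returns (scheduling record, model URL)."""
    record = {"state": "created"}                   # S602: scheduling record
    free = [s for s in servers if not s["locked"]]  # S603: query free resources
    if not free:
        record["state"] = "no_resource"
        return record, None
    server = free[0]
    server["locked"] = True                         # S604: lock the resource
    record["state"] = "scheduling"                  # S605: mark as scheduling
    try:
        record["state"] = "training"                # S606: executor runs training
        model_url = executor(server, corpus_url, upload_url)
    except Exception:
        server["locked"] = False                    # S606 failure branch: release
        record["state"] = "failed"                  # lock, mark record failed
        return record, None
    server["locked"] = False                        # S607: executor called back,
    record["state"] = "done"                        # release the locked resource
    return record, model_url                        # S608: trained model obtained
```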
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above embodiments, which are merely illustrative and not restrictive; those skilled in the art may make various modifications to the specific implementation and the application range in light of the spirit of the invention, and such modifications fall within the scope of the present invention.

Claims (14)

  1. NLP model training and release recognition system, characterized in that the system comprises:
    at least two GPU servers, wherein the GPU servers are used for machine learning computation;
    an NLP recognition model, which is a model file, obtained after algorithm fitting on the business text corpus, for starting an NLP language recognition service;
    an NLP language recognition service process, which is a server system process that provides NLP language recognition services externally based on the NLP recognition model file obtained after training on the business text corpus is completed;
    an NLP gateway, used for routing and forwarding a request called by a service party, according to a preset rule, to the server where a successfully started, externally serving NLP language recognition service process is located, so that the corresponding NLP language recognition service provides the recognition service;
    at least two GPU server resource scheduling instruction executors, wherein the GPU server resource scheduling instruction executors are used for executing scheduling instructions initiated by the resource scheduling center module;
    a resource scheduling center module, used for generating the following operation instructions and arrangement modes in the process of training the NLP recognition model, releasing the NLP recognition model, and synchronizing changes in the relationship between the NLP recognition model and the business data: a training instruction, a service start instruction, a service shutdown instruction, and a relationship synchronization instruction; a training arrangement mode, a model release arrangement mode, a model shutdown arrangement mode, and a relationship synchronization mode;
    wherein the training instruction is generated in the training arrangement mode, and after the resource scheduling center module allocates a GPU server for NLP recognition model training, the corresponding GPU server resource scheduling instruction executor is called for model training;
    the service start instruction and the service shutdown instruction are used in the model release arrangement mode; the resource scheduling center module generates, according to the release mode of a single scheduling request, a process-dimension service start instruction or service shutdown instruction on a GPU server, and starts or shuts down the NLP language recognition service process on a specific GPU server, wherein the service shutdown instruction is also used in the model shutdown arrangement mode to shut down a process providing services on a GPU server;
    the relationship synchronization instruction is generated in the relationship synchronization mode and is used for synchronizing the updated relationship to the corresponding NLP language recognition service process;
    and a service registry module, used for recording service information in, and deleting it from, the registry when the NLP language recognition service process is started and shut down, and for the NLP gateway to automatically discover the service process.
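As a non-limiting illustration of the registry/gateway interaction described in claim 1 (processes register on startup, are removed on shutdown, and the gateway discovers live processes and routes caller requests to them), the following sketch uses round-robin as one possible "preset rule"; all class and method names are assumptions, not the claimed components' APIs:

```python
class ServiceRegistry:
    """Hypothetical service registry: live NLP recognition service processes
    are recorded per (industry_id, scene_id) and removed on shutdown."""
    def __init__(self):
        self._services = {}  # (industry_id, scene_id) -> list of (ip, port)

    def register(self, industry_id, scene_id, ip, port):
        self._services.setdefault((industry_id, scene_id), []).append((ip, port))

    def deregister(self, industry_id, scene_id, ip, port):
        self._services.get((industry_id, scene_id), []).remove((ip, port))

    def discover(self, industry_id, scene_id):
        return list(self._services.get((industry_id, scene_id), []))

class NLPGateway:
    """Routes a caller's request to a live service process discovered through
    the registry; round-robin stands in for the patent's 'preset rule'."""
    def __init__(self, registry):
        self.registry = registry
        self._next = 0

    def route(self, industry_id, scene_id):
        procs = self.registry.discover(industry_id, scene_id)
        if not procs:
            raise LookupError("no live NLP recognition service for this scene")
        proc = procs[self._next % len(procs)]
        self._next += 1
        return proc
```

Because the gateway discovers processes on every call, processes deregistered by a shutdown or rolling release stop receiving traffic without any gateway-side reconfiguration.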
  2. The NLP model training release recognition system of claim 1, wherein the NLP gateway dynamically discovers, through the service registry module, the NLP language recognition service processes that can already provide services externally.
  3. The NLP model training release recognition system of claim 1, wherein the NLP language recognition service process depends on the model file obtained after NLP recognition model training is completed.
  4. The NLP model training release recognition system of claim 1, wherein the shutdown of an NLP language recognition service process by the resource scheduling center module comprises:
    generating an NLP language recognition service process shutdown scheduling record, with the state set to created;
    querying, according to the ip and port of the NLP language recognition service process to be shut down, the corresponding NLP language recognition service process, its state, and the industry id to which it belongs;
    when the NLP language recognition service process is in the running state, preempting the GPU server resource operation lock corresponding to the ip and port of the NLP language recognition service process;
    removing, according to the queried industry id, ip, and port of the NLP language recognition service process, the NLP language recognition service process correspondingly registered in the registry;
    calling a GPU server resource scheduling instruction executor to execute the operation of shutting down the NLP language recognition service process;
    and when the call succeeds, releasing the GPU server resource operation lock, updating the shutdown scheduling record state to succeeded, and marking the NLP service process resource as idle.
  5. The NLP model training release recognition system of claim 4, wherein the shutdown of an NLP language recognition service process by the resource scheduling center module further comprises the steps of: when the call to the GPU server resource scheduling instruction executor fails, releasing the GPU server resource operation lock, updating the state of the shutdown scheduling record to failed, and adding the failure reason.
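The shutdown flow of claims 4-5 (preempt the per-server operation lock, deregister so traffic stops, have the executor kill the process, then release the lock whether the call succeeded or failed) might be sketched as follows; `locks`, `registry`, `executor`, and their method names are hypothetical, not the patent's API:

```python
def shut_down_service(registry, locks, executor, ip, port):
    """Illustrative sketch of the claim 4/5 shutdown flow, under assumed
    helper objects: `locks` is a per-(ip, port) operation-lock table."""
    record = {"state": "created"}                 # shutdown scheduling record
    if not locks.try_acquire((ip, port)):         # preempt the server operation lock
        record.update(state="failed", reason="lock contention")
        return record
    try:
        registry.deregister(ip, port)             # stop routing traffic to this process
        executor.stop_process(ip, port)           # executor shuts the process down
    except RuntimeError as exc:
        record.update(state="failed", reason=str(exc))  # claim 5: record the reason
    else:
        record["state"] = "succeeded"             # process resource is now idle
    finally:
        locks.release((ip, port))                 # release the lock on either branch
    return record
```

Deregistering before stopping the process avoids routing requests to a process that is mid-shutdown.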
  6. The NLP model training release recognition system of claim 1, wherein the system is further configured for NLP service process capacity expansion release, comprising:
    acquiring the URL address, industry id, and scene id of the trained NLP recognition model, for expanding the capacity of the NLP recognition model;
    generating capacity expansion scheduling records corresponding to the expansion quantity according to the industry id and scene id, and updating the capacity expansion scheduling record state to created;
    querying and preempting the GPU server resource operation locks corresponding to the ip and port of the idle NLP language recognition service processes;
    calling a GPU server resource scheduling instruction executor to perform the release operation;
    updating the scheduling state of the NLP language recognition service process to succeeded and running;
    registering the successfully scheduled NLP language recognition service process in the registry and providing traffic access;
    and marking the NLP language recognition service process resource as successfully scheduled, and releasing the GPU server resource operation lock.
  7. The NLP model training release recognition system of claim 6, wherein the NLP service process capacity expansion release further comprises the steps of: when the number of idle NLP language recognition service processes is 0, updating the capacity expansion scheduling record state to capacity expansion release failed, and marking the reason as insufficient resources.
  8. The NLP model training release recognition system of claim 6, wherein the NLP service process capacity expansion release further comprises the steps of: before preempting the GPU server resource operation locks corresponding to the ip and port of the idle NLP language recognition service processes, querying whether the NLP service processes have already been expanded to a preset threshold, and if so, ending the capacity expansion release of the NLP language recognition service process.
  9. The NLP model training release recognition system of claim 6, wherein the NLP service process capacity expansion release further comprises the steps of: if the call to the GPU server resource scheduling instruction executor to perform the release operation fails, updating the capacity expansion scheduling record state to capacity expansion release failed, and marking the reason as GPU server resource scheduling instruction executor execution failure.
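Claims 6-9 together describe a scale-out loop with three failure modes: the preset process threshold is reached (claim 8), no idle process is available (claim 7), or the executor's release operation fails (claim 9). A non-limiting Python sketch, with `pool`, `registry`, `executor`, and `max_processes` as assumed helpers:

```python
def expand_service(pool, registry, executor, model_url, industry_id, scene_id,
                   count, max_processes):
    """Illustrative sketch of the capacity-expansion release in claims 6-9;
    helper objects and the record dicts are assumptions, not the patent's API."""
    records = []
    for _ in range(count):
        record = {"state": "created"}             # capacity expansion scheduling record
        if registry.count(industry_id, scene_id) >= max_processes:
            break                                 # claim 8: preset threshold reached
        proc = pool.acquire_idle()                # lock an idle service process resource
        if proc is None:
            record.update(state="failed", reason="insufficient resources")  # claim 7
            records.append(record)
            break
        try:
            executor.start_process(proc, model_url)   # release operation via executor
        except RuntimeError:
            record.update(state="failed", reason="executor execution failure")  # claim 9
            pool.release(proc)
        else:
            registry.register(industry_id, scene_id, *proc)  # start receiving traffic
            record["state"] = "succeeded"
        records.append(record)
    return records
```

Registering a process only after the executor succeeds ensures the gateway never routes traffic to a process that failed to start.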
  10. The NLP model training release recognition system of claim 1, wherein the system is further configured for NLP service process rolling release, comprising:
    generating a rolling scheduling record for the rolling release of the NLP language recognition service process, and updating the rolling scheduling record state to created;
    querying and preempting idle NLP language recognition service process resources;
    updating the successfully preempted idle NLP language recognition service process resources to the locked state;
    calling a GPU server resource scheduling instruction executor to execute the capacity expansion release of the NLP language recognition service process;
    updating the capacity expansion scheduling record state to succeeded, performing the rolling release scheduling of the NLP language recognition service process for the industry id and scene id corresponding to the rolling release, and generating a rolling release scheduling record;
    and updating the NLP language recognition service process for rolling release until the to-be-rolled release list has been traversed.
  11. The NLP model training release recognition system of claim 10, wherein the NLP service process rolling release further comprises the steps of: after traversing the rolling release of the NLP language recognition service processes, locking the server process resources; deregistering the corresponding services, and calling a GPU server resource scheduling instruction executor to shut down the corresponding services.
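The rolling release of claims 10-11 amounts to: for each old process in the to-be-rolled list, first bring up a replacement on an idle resource, register it, and only then deregister and shut down the old process, so capacity never drops during the roll. A sketch under assumed helpers (`pool`, `registry`, `executor` and their methods are not the patent's API):

```python
def rolling_release(pool, registry, executor, model_url, old_procs,
                    industry_id, scene_id):
    """Illustrative sketch of claims 10-11: replace each old service process
    with a newly released one before retiring it."""
    for old in old_procs:                         # traverse the to-be-rolled list
        new = pool.acquire_idle()                 # preempt and lock an idle resource
        if new is None:
            break                                 # no spare capacity: stop rolling
        executor.start_process(new, model_url)    # capacity expansion release of the
                                                  # new-version process via the executor
        registry.register(industry_id, scene_id, *new)    # new process takes traffic
        registry.deregister(industry_id, scene_id, *old)  # claim 11: deregister old
        executor.stop_process(*old)               # then shut the old process down
        pool.release(old)                         # old resource returns to the idle pool
```

Releasing each retired resource back to the pool is what lets a long to-be-rolled list proceed with only one spare process's worth of headroom.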
  12. The NLP model training release recognition system of claim 1, wherein the synchronization of the NLP language recognition service process relationships by the resource scheduling center module comprises:
    acquiring the relationship file URL address, industry id, and scene id of the trained NLP recognition model;
    querying the running NLP language recognition service processes according to the industry id and scene id;
    calling a GPU server resource scheduling instruction executor to execute the relationship synchronization;
    and recording the synchronization result of the relationship between the NLP recognition model and the business scene in the log.
  13. The NLP model training release recognition system of claim 1, wherein the training of the NLP recognition model by the resource scheduling center module comprises:
    acquiring the NLP recognition model training corpus address, the upload URL address for the trained NLP recognition model, and the industry id and scene id related to the NLP recognition model;
    generating a training scheduling record and setting it to the created state;
    querying server resources available for training and locking them;
    updating the locked server resources for training to the locked state;
    generating a training record and marking the training scheduling record state as scheduling;
    calling a GPU server resource scheduling instruction executor to execute the training instruction, and updating the training scheduling record to training;
    after the GPU server resource scheduling instruction executor finishes executing the training instruction and calls back the training result, releasing the locked server resources;
    and obtaining the trained NLP recognition model.
  14. The NLP model training release recognition system of claim 13, wherein the training of the NLP recognition model by the resource scheduling center module further comprises: when the call to the GPU server resource scheduling instruction executor to execute the training instruction fails, releasing the locked server resources and marking the training scheduling record as failed.
CN202010853842.7A 2020-08-24 2020-08-24 NLP model training and publishing recognition system Active CN111967613B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010853842.7A CN111967613B (en) 2020-08-24 2020-08-24 NLP model training and publishing recognition system


Publications (2)

Publication Number Publication Date
CN111967613A CN111967613A (en) 2020-11-20
CN111967613B true CN111967613B (en) 2023-06-16

Family

ID=73390752

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010853842.7A Active CN111967613B (en) 2020-08-24 2020-08-24 NLP model training and publishing recognition system

Country Status (1)

Country Link
CN (1) CN111967613B (en)

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4640335B2 (en) * 2003-01-20 2011-03-02 デル・プロダクツ・エル・ピー Data storage system
AU2005318955A1 (en) * 2004-12-21 2006-06-29 Bmc Software, Inc. System and method for business service management and building business service model
CN105205735A (en) * 2015-10-08 2015-12-30 南京南瑞继保电气有限公司 Power dispatching data cloud service system and implementation method
US20170124497A1 (en) * 2015-10-28 2017-05-04 Fractal Industries, Inc. System for automated capture and analysis of business information for reliable business venture outcome prediction
US10831762B2 (en) * 2015-11-06 2020-11-10 International Business Machines Corporation Extracting and denoising concept mentions using distributed representations of concepts
CN105930191B (en) * 2016-04-28 2019-01-04 网宿科技股份有限公司 The overloaded method and device of system service
US20170345112A1 (en) * 2016-05-25 2017-11-30 Tyco Fire & Security Gmbh Dynamic Threat Analysis Engine for Mobile Users
US10332505B2 (en) * 2017-03-09 2019-06-25 Capital One Services, Llc Systems and methods for providing automated natural language dialogue with customers
CN109271602B (en) * 2018-09-05 2020-09-15 腾讯科技(深圳)有限公司 Deep learning model publishing method and device
CN110795529B (en) * 2019-09-05 2023-07-25 腾讯科技(深圳)有限公司 Model management method and device, storage medium and electronic equipment
CN110659261A (en) * 2019-09-19 2020-01-07 成都数之联科技有限公司 Data mining model publishing method, model and model service management method
CN110688473A (en) * 2019-10-09 2020-01-14 浙江百应科技有限公司 Method for robot to dynamically acquire information
CN111400081A (en) * 2020-03-24 2020-07-10 恒生电子股份有限公司 Process guarding method and device, electronic equipment and computer storage medium
CN111444021B (en) * 2020-04-02 2023-03-24 电子科技大学 Synchronous training method, server and system based on distributed machine learning



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: NLP model training release recognition system

Effective date of registration: 20231108

Granted publication date: 20230616

Pledgee: Guotou Taikang Trust Co.,Ltd.

Pledgor: ZHEJIANG BYAI TECHNOLOGY Co.,Ltd.

Registration number: Y2023980064435
