CN111200606A - Deep learning model task processing method, system, server and storage medium - Google Patents

Deep learning model task processing method, system, server and storage medium

Info

Publication number
CN111200606A
Authority
CN
China
Prior art keywords
deep learning
learning model
task
service
model task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911423263.2A
Other languages
Chinese (zh)
Inventor
熊为星 (Xiong Weixing)
熊友军 (Xiong Youjun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ubtech Robotics Corp
Original Assignee
Ubtech Robotics Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ubtech Robotics Corp
Priority to CN201911423263.2A
Publication of CN111200606A
Legal status: Pending

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 - Protocols
    • H04L 67/133 - Protocols for remote procedure calls [RPC]
    • H04L 67/50 - Network services
    • H04L 67/56 - Provisioning of proxy services
    • H04L 67/562 - Brokering proxy services
    • H04L 67/60 - Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks

Abstract

The application discloses a deep learning model task processing method, system, server, and storage medium. The deep learning model task processing method comprises the following steps: a remote procedure call service receives a deep learning model task request and stores the deep learning model task request to message middleware; the message middleware distributes the deep learning model task request to a plurality of task execution units of a distributed task queue; and the plurality of task execution units concurrently execute and process the deep learning model task request to obtain an execution result. In this way, the technical cost can be reduced, and deep learning model task processing services can be provided effectively and conveniently.

Description

Deep learning model task processing method, system, server and storage medium
Technical Field
The present application relates to the field of intelligent services, and in particular, to a method, a system, a server, and a storage medium for processing deep learning model tasks.
Background
With the development of intelligent services, more and more fields use pre-trained models, such as deep learning models, to perform target detection, target recognition, and the like. However, deploying and training a model is a complex process that can be roughly divided into six steps: first, determine the work task; next, roughly design a model for that task; then, determine roughly what data the model needs; then collect the data; then clean the collected data and use it to train the model; and finally, test the trained model on a test set to verify its effect. If the effect does not meet the online requirement, one must go back to collect more data or readjust the model design until the test accuracy of the trained model meets the online standard. In addition, the model is generally deployed on a GPU server, which requires consideration of server hardware selection, dependent software selection, environment planning, and the like. A complex model deployment scheme has operating system requirements, and not only needs to support models exported by a model framework but also needs to schedule heterogeneous computing resources such as CPUs and GPUs at the back end. This makes processing deep learning model tasks very costly.
Disclosure of Invention
The technical problem mainly solved by the present application is to provide a deep learning model task processing method, system, server, and storage medium that can reduce the technical cost of deep learning model task processing and provide deep learning model task processing services effectively and conveniently.
In order to solve the above technical problem, the present application adopts a technical solution of providing a deep learning model task processing method comprising the following steps: a remote procedure call service receives a deep learning model task request and stores the deep learning model task request to message middleware; the message middleware distributes the deep learning model task request to a plurality of task execution units of a distributed task queue; and the plurality of task execution units concurrently execute and process the deep learning model task request to obtain an execution result.
The plurality of task execution units store the execution result to the message middleware.
The remote procedure call service retrieves the execution result from the message middleware and returns the execution result to the user side.
Wherein the receiving of the deep learning model task request by the remote procedure call service comprises: a proxy service receives the deep learning model task request and monitors a service port of the remote procedure call service; and the proxy service forwards the deep learning model task request to the remote procedure call service.
Wherein the remote procedure call service, the message middleware, and the distributed task queue are integrated into a task processing service, the number of task processing services is at least two, and the step of the proxy service forwarding the deep learning model task request to the remote procedure call service comprises: the proxy service randomly forwards the deep learning model task request to the remote procedure call service of any one of the task processing services.
Wherein the distributed task queue is Celery, the message middleware is RabbitMQ, the remote procedure call service is gRPC, and the proxy service is nginx.
In order to solve the above technical problem, another technical solution adopted by the present application is to provide a deep learning model task processing system, including: a remote procedure call service module, configured to receive a deep learning model task request and store the deep learning model task request to message middleware; a message middleware module, configured to distribute the deep learning model task request to a plurality of task execution units of a distributed task queue; and a plurality of task execution modules, configured to concurrently execute and process the deep learning model task request to obtain an execution result.
Wherein the remote procedure call service module, the message middleware module, and the plurality of task execution modules are integrated into a task processing service module, and the number of task processing service modules is at least two.
In order to solve the above technical problem, another technical solution adopted by the present application is: there is provided a server comprising a processor for executing instructions to implement any of the deep learning model task processing methods described above.
In order to solve the above technical problem, the present application adopts yet another technical solution: a computer-readable storage medium is provided for storing instructions/program data that can be executed to implement any of the deep learning model task processing methods described above.
The beneficial effects of the present application are as follows. The deep learning model task processing method is implemented using a deep learning model task processing system that includes a distributed task queue with a plurality of task execution units, and the deep learning model task request is concurrently executed and processed by the plurality of task execution units to obtain an execution result, thereby realizing the processing of the deep learning model task. Because the present application adopts a distributed task queue to perform distributed concurrent processing of deep learning model tasks, efficiency is improved and technical requirements are reduced; therefore, the technical cost can be reduced, and deep learning model task processing services can be provided effectively and conveniently.
Drawings
FIG. 1 is a schematic structural diagram of a deep learning model task processing system in an embodiment of the present application;
FIG. 2 is a schematic flow chart of a deep learning model task processing method in an embodiment of the present application;
FIG. 3 is a schematic structural diagram of another deep learning model task processing system in an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a server according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a computer-readable storage medium in an embodiment of the present application.
Detailed Description
In order to make the purpose, technical solution, and effect of the present application clearer, the present application is further described in detail below with reference to the accompanying drawings and embodiments.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a deep learning model task processing system according to an embodiment of the present disclosure. The deep learning model task processing system can be used for training, deploying, online publishing and the like of a deep learning model. The deep learning model task processing system 100 includes an agent service 10 and a task processing service 20.
The proxy service 10 is an intermediate service between the user side and the task processing service 20. With the proxy service 10, a deep learning model task request from the user side does not go directly to the task processing service 20 but first goes to the proxy service 10, which then forwards it to the task processing service 20; likewise, the execution result returned by the task processing service 20 is not sent directly to the user side but is sent to the proxy service 10, which forwards it to the user side. The proxy service 10 may have a certain caching capability and cache part of the deep learning model task requests and/or execution results, which can improve the speed and efficiency of interaction between the user side and the task processing service 20. The proxy service 10 may also have security filtering and flow control capabilities and perform filtering management and control on the deep learning model task requests and/or execution results, so as to protect the security of interaction between the user side and the task processing service 20. In one embodiment, nginx, which has a small memory footprint and strong concurrency capability, can be used as the proxy service.
The task processing service 20 is a core part of the deep learning model task processing system 100, and is used for substantially executing and processing deep learning model task requests. The task processing service 20 includes a remote procedure call service 21 and a distributed task queue 22.
The remote procedure call (RPC) service 21 provides a set of mechanisms that allow applications to communicate with each other, following the server/client model. In use, the client calls an interface provided by the server as quickly and conveniently as calling a local function. In one embodiment, gRPC may be employed as the remote procedure call service.
gRPC can define interfaces through protobuf, so stricter interface constraints can be enforced. In addition, data can be serialized into a binary encoding through protobuf, which greatly reduces the amount of data to be transmitted and thereby greatly improves performance. gRPC also conveniently supports streaming communication (in theory streaming can be used over HTTP/2, but the RESTful APIs of web services rarely use it; general streaming data applications such as video or voice streams typically use dedicated protocols such as HLS and RTMP, which are not web services but have dedicated server applications), which is crucial when deploying deep learning services such as speech recognition or video picture classification.
However, gRPC is typically not used alone but as one component, because a production environment faces large concurrency, and gRPC does not itself provide some of the components a distributed system needs to handle it. Moreover, a real online service also needs components including load balancing, rate limiting and circuit breaking, monitoring and alarms, service registration and discovery, and so on. In this embodiment, gRPC is used as one component together with a distributed task queue.
The distributed task queue 22 is the core part of the task processing service 20. The distributed task queue 22 is a distributed system that can handle a large number of messages and, in combination with gRPC, can serve large concurrent requests in a production environment.
The distributed task queue 22 mainly comprises three parts: the message middleware 221 (Broker), the task execution units 222 (Worker), and a task execution result storage (Backend) unit (not shown). The message middleware 221 is mainly used for decoupling between components: through the message middleware 221, the sender of a message need not be aware of the existence of the message consumer, and vice versa. The task execution units 222 are the units with which the distributed task queue 22 performs task processing; the task execution units 222 run concurrently on distributed system nodes. The task execution result storage (Backend) is used to store the results of tasks executed by the task execution units. In one embodiment, Celery may be used as the distributed task queue.
Celery is a simple, flexible, and reliable distributed system capable of handling large amounts of messages, which in combination with gRPC can serve requests in a large concurrent production environment. Celery does not itself provide request services but can be conveniently integrated with third-party message middleware such as RabbitMQ and Redis. RabbitMQ is an open-source AMQP implementation; its server side is written in Erlang, it supports various clients such as Python, Ruby, Java, C, and PHP, it supports AJAX, it is used for storing and forwarding messages in a distributed system, and it performs well in terms of usability, extensibility, and high availability. AMQP (Advanced Message Queuing Protocol) is an open application-layer protocol standard designed for message-oriented middleware.
In an embodiment, two task processing services 20 can be provided in one deep learning model task processing system 100, such as the task processing service 20 and the task processing service 20' shown in FIG. 1; the architecture and service mode of the task processing service 20 and the task processing service 20' are exactly the same. The two task processing services 20 may be distributed over two different servers, forming a high-availability mode: if one server goes down, the other server can continue to provide the service. After receiving a deep learning model task request, the proxy service 10 may randomly send it to either one of the task processing services 20, and the server where that task processing service is located processes the request.
The deep learning model task processing system 100 can achieve rapid deployment, elastic scaling, and high service availability, avoiding the need to build a complex deployment environment, while scaling elastically to increase service capacity when server resources are found to be insufficient.
In an embodiment, the deep learning model processing system can be configured according to actual requirements and may, for example, comprise three configuration files: celeryconfig.py, task.py, and server.py. celeryconfig.py is the configuration script of the Celery service, in which the message middleware address, the storage (backend) address, the number of workers started by Celery, and the like can be configured. task.py is the task monitoring service and calls the prediction interface. server.py is the server started by gRPC; it receives request services in the agreed protobuf format and calls the service monitored in task.py.
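For illustration only, a minimal sketch of what such a celeryconfig.py might look like follows; the broker and backend addresses and the worker count are illustrative assumptions, not values disclosed in this application:

    # celeryconfig.py -- a minimal sketch; the addresses and counts are
    # illustrative assumptions, not values from this application.
    broker_url = 'amqp://guest:guest@localhost:5672//'  # RabbitMQ message middleware address
    result_backend = 'rpc://'                           # storage (backend) address for results
    worker_concurrency = 4                              # number of workers started by Celery
    task_serializer = 'json'
    result_serializer = 'json'
    accept_content = ['json']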
Based on this, the present application further provides a deep learning model task processing method, please refer to fig. 2, and fig. 2 is a schematic flow diagram of the deep learning model task processing method in the embodiment of the present application. In this embodiment, the deep learning model task processing method includes:
s310: the remote procedure call service receives a deep learning model task request.
The deep learning model task request is sent to the proxy service by the user side, and then is forwarded to the remote procedure call service by the proxy service.
The user can send the deep learning model task request to the proxy service through the network according to their own requirements. The model requested to be deployed may be a deep learning model, such as one developed based on TensorFlow, or it may be another model.
The proxy service may monitor the service ports of the remote procedure call services and may randomly forward the deep learning model task request to different remote procedure call services. Generally, the resource usage state of the remote procedure call services can be monitored, and deep learning model task requests can be forwarded evenly to the remote procedure call services so as to make full use of resources.
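To illustrate this step, a hypothetical client-side call is sketched below in Python; the generated modules model_pb2 and model_pb2_grpc, the Model service with its Predict RPC, and the proxy address are all assumptions made for the sketch and do not come from this application:

    # A hypothetical client call; model_pb2/model_pb2_grpc are assumed to be
    # generated from a model.proto defining a Model service with a Predict RPC.
    import grpc

    import model_pb2       # assumed generated message classes
    import model_pb2_grpc  # assumed generated service stub

    def send_request(payload: str) -> str:
        # The channel points at the proxy (e.g. nginx), which forwards the
        # request to the gRPC service port of one of the task processing services.
        with grpc.insecure_channel('proxy.example.com:50051') as channel:
            stub = model_pb2_grpc.ModelStub(channel)
            reply = stub.Predict(model_pb2.PredictRequest(payload=payload))
        return reply.result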
S320: the remote procedure call service forwards the deep learning model task request to message middleware of a distributed task queue.
The remote procedure call service follows a server/client communication model, and the presence of the message middleware prevents the task execution units from directly accessing the remote procedure call service. The distributed task queue Celery service is configured with a celeryconfig.py script file, which can be used to configure the message middleware address, the storage (backend) address, and the number of workers started by Celery. According to this Celery service configuration, the gRPC server side sends the request forwarded by nginx to RabbitMQ, the message middleware of the distributed task queue Celery.
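A minimal sketch of a server.py along these lines is given below, under the same assumed generated stubs as the client sketch above and assuming a Celery task named predict defined in task.py (sketched under S330 below); it is an illustration under those assumptions, not the implementation disclosed in this application:

    # server.py -- a minimal sketch; the stub names and the "predict" task
    # are assumptions carried over from the sketches above and below.
    from concurrent import futures

    import grpc

    import model_pb2       # assumed generated message classes
    import model_pb2_grpc  # assumed generated service stub
    from task import predict

    class ModelServicer(model_pb2_grpc.ModelServicer):
        def Predict(self, request, context):
            # Store the request to the RabbitMQ broker; a Celery worker picks
            # it up, and get() retrieves the execution result from the backend.
            async_result = predict.delay(request.payload)
            return model_pb2.PredictReply(result=str(async_result.get(timeout=30)))

    def serve():
        server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
        model_pb2_grpc.add_ModelServicer_to_server(ModelServicer(), server)
        server.add_insecure_port('[::]:50051')  # service port monitored by the proxy
        server.start()
        server.wait_for_termination()

    if __name__ == '__main__':
        serve()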
S330: the message middleware distributes the deep learning model task request to a plurality of task execution units.
The message middleware RabbitMQ receives the deep learning model task request from the gRPC server side and can store the request according to the Advanced Message Queuing Protocol; the messages received by the message middleware are then handed over to the task execution units started by Celery for execution. The message middleware can monitor the resource usage of the task execution units, distribute deep learning model tasks reasonably, and make reasonable use of resources.
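A minimal sketch of the worker side (task.py) follows; the model-loading placeholder and the task name predict are illustrative assumptions:

    # task.py -- a minimal sketch of the worker side; the model loading is a
    # placeholder so the sketch stays runnable without a real model.
    from celery import Celery

    app = Celery('task')
    app.config_from_object('celeryconfig')  # broker, backend, worker count

    def _load_model():
        # Placeholder for loading a real deep learning model (e.g. a
        # TensorFlow SavedModel); a trivial callable is returned instead.
        return lambda x: {'input': x, 'label': 'demo'}

    _model = _load_model()  # loaded once per worker process

    @app.task
    def predict(payload):
        # Executed concurrently by Celery workers; returns the inference result.
        return _model(payload)

Workers could then be started with, for example, "celery -A task worker", after which the broker distributes queued requests among them.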
S340: and the plurality of task execution units execute and process the deep learning model task request simultaneously to obtain an execution result.
Celery has a plurality of task execution units, which are deployed in a distributed manner on one or more servers. After receiving requests, the task execution units can process them concurrently and separately; finally, the processing results are collected to obtain the execution result.
The task execution unit may store the execution result obtained by the processing to the message middleware. The remote procedure call service can call the execution result in the message middleware, send the execution result to the proxy service, and return the execution result to the user side by the proxy service.
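Where a result is not fetched inline as in the server sketch above, it can in principle be fetched later by task id; the sketch below assumes a result backend that supports retrieval by id (e.g. Redis configured as result_backend, rather than the rpc backend used in the earlier sketch, which only delivers results to the original caller):

    # A sketch of fetching a stored execution result later by task id,
    # assuming the Celery app from the task.py sketch above.
    from celery.result import AsyncResult

    from task import app

    def fetch_result(task_id: str):
        res = AsyncResult(task_id, app=app)
        if res.ready():
            return res.get()  # pull the result from the result backend
        return None           # still queued or being executed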
Different from the prior art, this embodiment adopts a distributed task queue to perform distributed concurrent processing of deep learning model processing tasks, which improves efficiency and reduces technical requirements; therefore, the technical cost can be reduced, and deep learning model task processing services can be provided effectively and conveniently. The method and system can achieve rapid deployment, elastic scaling, and high service availability, avoiding the need to build a complex deployment environment, while scaling elastically to increase service capacity when server resources are found to be insufficient.
Referring to fig. 3, fig. 3 is a schematic structural diagram of another deep learning model task processing system according to an embodiment of the present disclosure. In this embodiment, the deep learning model task processing system 200 includes a remote procedure call service module 210, a message middleware module 220, and a plurality of task execution modules 230.
The remote procedure call service module 210 is configured to receive deep learning model task requests and store the deep learning model task requests to the message middleware module 220. The message middleware module 220 is used to distribute deep learning model task requests to a plurality of task execution modules 230 of a distributed task queue. The plurality of task execution modules 230 are configured to concurrently execute and process the deep learning model task request to obtain an execution result.
In one embodiment, the remote procedure call service module 210, the message middleware module 220 and the plurality of task execution modules 230 are integrated into a task processing service module, at least two task processing service modules are provided in one deep learning model task processing system 200, and the two task processing service modules have the same architecture and service mode.
In this embodiment, by utilizing the distributed task execution modules, the deep learning model task processing system can achieve rapid deployment, elastic scaling, and high service availability, avoiding the need to build a complex deployment environment, while scaling elastically to increase service capacity when server resources are found to be insufficient.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a server according to an embodiment of the present application. In this embodiment, the server 40 includes a processor 41.
Processor 41 may also be referred to as a CPU (Central Processing Unit). The processor 41 may be an integrated circuit chip having signal processing capabilities. The processor 41 may also be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components. A general purpose processor may be a microprocessor or the processor 41 may be any conventional processor or the like.
Server 40 may further include a memory (not shown) for storing instructions and data needed for operation of processor 41.
Processor 41 is configured to execute instructions to implement the methods provided by any of the embodiments of the deep learning model task processing method of the present application and any non-conflicting combinations thereof.
The server may be an independent server or a server cluster composed of a plurality of servers.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present application. The computer-readable storage medium 50 of the embodiments of the present application stores instructions/program data 51 that, when executed, implement the methods provided by any of the embodiments of the deep learning model task processing method of the present application, as well as any non-conflicting combinations thereof. The instructions/program data 51 may form a program file stored in the storage medium 50 in the form of a software product, so as to enable a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium 50 includes various media capable of storing program code, such as a USB flash disk, a mobile hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk, or terminal devices such as a computer, a server, a mobile phone, or a tablet.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present application.
The above embodiments are merely examples and are not intended to limit the scope of the present disclosure, and all modifications, equivalents, and flow charts using the contents of the specification and drawings of the present disclosure or those directly or indirectly applied to other related technical fields are intended to be included in the scope of the present disclosure.

Claims (10)

1. A deep learning model task processing method is characterized by comprising the following steps:
the remote procedure call service receives a deep learning model task request and stores the deep learning model task request to a message middleware;
the message middleware distributes the deep learning model task request to a plurality of task execution units of a distributed task queue;
and the plurality of task execution units concurrently execute and process the deep learning model task request to obtain an execution result.
2. The deep learning model task processing method according to claim 1, wherein a plurality of the task execution units store the execution result to the message middleware.
3. The deep learning model task processing method of claim 2, wherein the remote procedure call service retrieves the execution result in the message middleware and returns the execution result.
4. The deep learning model task processing method of claim 1, wherein the receiving of the deep learning model task request by the remote procedure call service comprises:
the proxy service receives the deep learning model task request and monitors a service port of the remote process call service;
the proxy service forwards the deep learning model task request to the remote procedure call service.
5. The deep learning model task processing method of claim 4, wherein the remote procedure call service, the message middleware and the distributed task queue are integrated into a task processing service, the number of the task processing services is at least two, and the proxy service forwarding the deep learning model task request to the remote procedure call service comprises:
the proxy service randomly forwards the deep learning model task request to the remote procedure call service of any one of the task processing services.
6. The deep learning model task processing method of claim 4, wherein the distributed task queue is Celery, the message middleware is RabbitMQ, the remote procedure call service is gRPC, and the proxy service is nginx.
7. A deep learning model task processing system is characterized by comprising a remote procedure call service module, a message middleware module and a plurality of task execution modules:
the remote procedure call service module is used for receiving a deep learning model task request and storing the deep learning model task request to the message middleware module;
the message middleware module is used for distributing the deep learning model task request to a plurality of task execution units of a distributed task queue;
the plurality of task execution modules are used for concurrently executing and processing the deep learning model task request to obtain an execution result.
8. The deep learning model task processing system of claim 7, wherein the remote procedure call service module, the message middleware module, and the plurality of task execution modules are integrated into a task processing service module, and the number of task processing service modules is at least two.
9. A server, characterized in that the server comprises a processor for executing instructions to implement the deep learning model task processing method according to any one of claims 1 to 6.
10. A computer-readable storage medium for storing instructions/program data executable to implement a deep learning model task processing method as claimed in any one of claims 1-6.
CN201911423263.2A 2019-12-31 2019-12-31 Deep learning model task processing method, system, server and storage medium Pending CN111200606A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911423263.2A CN111200606A (en) 2019-12-31 2019-12-31 Deep learning model task processing method, system, server and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911423263.2A CN111200606A (en) 2019-12-31 2019-12-31 Deep learning model task processing method, system, server and storage medium

Publications (1)

Publication Number Publication Date
CN111200606A (en) 2020-05-26

Family

ID=70746737

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911423263.2A Pending CN111200606A (en) 2019-12-31 2019-12-31 Deep learning model task processing method, system, server and storage medium

Country Status (1)

Country Link
CN (1) CN111200606A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111857734A (en) * 2020-06-19 2020-10-30 苏州浪潮智能科技有限公司 Deployment and use method of distributed deep learning model platform
CN111897828A (en) * 2020-07-31 2020-11-06 广州视源电子科技股份有限公司 Data batch processing implementation method, device, equipment and storage medium
CN112015553A (en) * 2020-08-27 2020-12-01 深圳壹账通智能科技有限公司 Data processing method, device, equipment and medium based on machine learning model
CN115756875A (en) * 2023-01-06 2023-03-07 山东未来网络研究院(紫金山实验室工业互联网创新应用基地) Online service deployment method and system of machine learning model for streaming data

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108076098A (en) * 2016-11-16 2018-05-25 北京京东尚科信息技术有限公司 A kind of method for processing business and system
CN108920259A (en) * 2018-03-30 2018-11-30 华为技术有限公司 Deep learning job scheduling method, system and relevant device
CN109360646A (en) * 2018-08-31 2019-02-19 透彻影像(北京)科技有限公司 Pathology assistant diagnosis system based on artificial intelligence
CN109358944A (en) * 2018-09-17 2019-02-19 深算科技(重庆)有限公司 Deep learning distributed arithmetic method, apparatus, computer equipment and storage medium
CN110462589A (en) * 2016-11-28 2019-11-15 亚马逊技术有限公司 On-demand code in local device coordinator executes
EP3584703A1 (en) * 2018-06-20 2019-12-25 Aptiv Technologies Limited Over-the-air (ota) mobility services platform

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108076098A (en) * 2016-11-16 2018-05-25 北京京东尚科信息技术有限公司 A kind of method for processing business and system
CN110462589A (en) * 2016-11-28 2019-11-15 亚马逊技术有限公司 On-demand code in local device coordinator executes
CN108920259A (en) * 2018-03-30 2018-11-30 华为技术有限公司 Deep learning job scheduling method, system and relevant device
EP3584703A1 (en) * 2018-06-20 2019-12-25 Aptiv Technologies Limited Over-the-air (ota) mobility services platform
CN109360646A (en) * 2018-08-31 2019-02-19 透彻影像(北京)科技有限公司 Pathology assistant diagnosis system based on artificial intelligence
CN109358944A (en) * 2018-09-17 2019-02-19 深算科技(重庆)有限公司 Deep learning distributed arithmetic method, apparatus, computer equipment and storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111857734A (en) * 2020-06-19 2020-10-30 苏州浪潮智能科技有限公司 Deployment and use method of distributed deep learning model platform
CN111897828A (en) * 2020-07-31 2020-11-06 广州视源电子科技股份有限公司 Data batch processing implementation method, device, equipment and storage medium
CN112015553A (en) * 2020-08-27 2020-12-01 深圳壹账通智能科技有限公司 Data processing method, device, equipment and medium based on machine learning model
CN115756875A (en) * 2023-01-06 2023-03-07 山东未来网络研究院(紫金山实验室工业互联网创新应用基地) Online service deployment method and system of machine learning model for streaming data

Similar Documents

Publication Publication Date Title
CN110365752B (en) Service data processing method and device, electronic equipment and storage medium
CN111200606A (en) Deep learning model task processing method, system, server and storage medium
CN107729139B (en) Method and device for concurrently acquiring resources
US11080090B2 (en) Method and system for scalable job processing
CN110958281B (en) Data transmission method and communication device based on Internet of things
AU2019256257B2 (en) Processor core scheduling method and apparatus, terminal, and storage medium
CN110933075B (en) Service calling method and device, electronic equipment and storage medium
JP6758139B2 (en) Systems and methods for efficient call processing
CN113703997A (en) Bidirectional asynchronous communication middleware system integrating multiple message agents and implementation method
WO2022257247A1 (en) Data processing method and apparatus, and computer-readable storage medium
CN111835797A (en) Data processing method, device and equipment
CN111552577B (en) Method for preventing invalid request from occurring and storage medium
WO2019201111A1 (en) Information processing method, apparatus and device, and computer-readable storage medium
CN111966502A (en) Method and device for adjusting number of instances, electronic equipment and readable storage medium
CN111190731A (en) Cluster task scheduling system based on weight
CN115994156A (en) Method and system for real-time analysis of data streams
CN114374657A (en) Data processing method and device
CN115250276A (en) Distributed system and data processing method and device
CN109639795B (en) Service management method and device based on AcitveMQ message queue
CN113760693A (en) Method and apparatus for local debugging of microservice systems
CN113722115A (en) Method, device, equipment and computer readable medium for calling interface
CN112104679B (en) Method, apparatus, device and medium for processing hypertext transfer protocol request
CN110933122A (en) Method, apparatus, and computer storage medium for managing server
CN117076057B (en) AI service request scheduling method, device, equipment and medium
CN110740151A (en) micro-service adjusting method, device, server and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20200526