CN114389998A - Flow distribution method, system, computer equipment and storage medium - Google Patents

Flow distribution method, system, computer equipment and storage medium

Info

Publication number
CN114389998A
Authority
CN
China
Prior art keywords
user
organization
partition
code
designated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111569109.3A
Other languages
Chinese (zh)
Inventor
文师明
张华�
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Aozhe Network Technology Co ltd
Original Assignee
Shenzhen Aozhe Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Aozhe Network Technology Co ltd
Priority to CN202111569109.3A
Publication of CN114389998A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/20 Traffic policing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/02 Network architectures or network communication protocols for network security for separating internal from external traffic, e.g. firewalls
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/10 Network architectures or network communication protocols for network security for controlling access to devices or network resources
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/34 Network arrangements or protocols for supporting network services or applications involving the movement of software or configuration parameters

Abstract

Embodiments of the invention disclose a traffic distribution method, system, computer device and storage medium. The method is applied to a software-as-a-service (SaaS) system, where the SaaS system comprises a Nginx gateway, at least one organization and at least one designated partition; at least one organization is allocated under each designated partition, and each organization comprises at least one user. The method comprises the following steps: receiving a request sent by a user, and determining the organization to which the requesting user belongs; and sending the user request to the designated partition to which the organization belongs for processing. According to the technical scheme provided by the embodiments of the invention, traffic is distributed to different designated partitions according to the organization to which the user request belongs, so that application logic isolation between different users is achieved and the data security of users is improved.

Description

Flow distribution method, system, computer equipment and storage medium
Technical Field
The present invention relates to the field of computer network communication, and in particular, to a traffic distribution method, system, computer device, and storage medium.
Background
In the related art, for multi-tenant SaaS applications, the development of the internet, the rapid growth of users and services, and the linear growth of system access volume have steadily increased the load pressure on the system, posing a severe test to software service providers. It is currently common to use Nginx for load balancing, distributing requests to an internal server cluster. The default load balancing policy of Nginx is weighted round robin. The weighted round robin algorithm builds on the round robin algorithm by assigning each server a different weight according to its processing capacity, so that each server receives a number of service requests proportional to its weight. This distribution method achieves load balancing, but leaves a tenant data security problem.
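For illustration, a conventional weighted round-robin configuration of this kind might look like the following Nginx fragment (a sketch only; the upstream name, server addresses and weights are placeholder values, not taken from this application):
# Illustrative weighted round-robin upstream; the upstream name and server addresses are placeholders.
upstream app_cluster {
    server 10.0.0.1:8080 weight=3;   # higher weight, receives proportionally more requests
    server 10.0.0.2:8080 weight=2;
    server 10.0.0.3:8080 weight=1;
}
server {
    listen 80;
    location / {
        proxy_pass http://app_cluster;   # requests are distributed across the cluster by weight
    }
}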
Disclosure of Invention
The present invention is directed to solving at least one of the problems of the prior art. Therefore, the invention provides a traffic distribution method, a traffic distribution system, a computer device and a storage medium, which can improve the security of user data.
A traffic distribution method according to the first aspect of the present invention is applied to a SaaS system, where the SaaS system includes a Nginx gateway, at least one organization and at least one designated partition; at least one organization is allocated under each designated partition, and the organization includes at least one user. The method includes:
receiving a request sent by a user, and determining an organization to which the user corresponding to the user request belongs;
and sending the user request to a designated partition to which the organization belongs for processing.
The traffic distribution method provided by the embodiment of the invention has at least the following beneficial effects: the organization to which the user belongs is determined from the user request, and traffic is distributed to different designated partitions according to the organization, so that application logic isolation between different users is achieved and the data security of users is improved.
According to some embodiments of the invention, said sending said user request to a designated partition to which said organization belongs for processing comprises:
determining a user organization code from the organization; determining a designated partition address corresponding to the user organization code according to the user organization code;
and sending the user request to the designated partition for processing based on the designated partition address.
According to some embodiments of the invention, the determining the designated partition address corresponding to the user organization code according to the user organization code comprises:
determining a partition code corresponding to the user organization code according to the user organization code;
and determining a designated partition address corresponding to the partition code according to the partition code.
According to some embodiments of the invention, the determining the designated partition address corresponding to the user organization code according to the user organization code comprises:
and acquiring a corresponding designated partition address from a first preset address mapping table according to the user organization code.
According to some embodiments of the invention, the user organization code is obtained by:
and acquiring a user registration name, and determining the user organization code according to the user registration name.
According to some embodiments of the invention, the determining the designated partition address corresponding to the partition code according to the partition code comprises:
and acquiring a corresponding designated partition address from a second preset address mapping table according to the partition code.
According to some embodiments of the invention, the user organization code is further obtained by:
and acquiring a user network address, and determining the user organization code according to the user network address.
A SaaS system according to an embodiment of the second aspect of the present invention comprises a Nginx gateway, at least one organization and at least one designated partition, wherein at least one organization is allocated under each designated partition, and the organization comprises at least one user;
the Nginx gateway receives a request sent by a user and determines an organization to which the user corresponding to the user request belongs;
and the Nginx gateway sends the user request to the designated partition to which the organization belongs for processing.
A computer device according to an embodiment of the third aspect of the present invention comprises a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method according to any one of the embodiments of the first aspect of the present invention when executing the computer program.
A storage medium according to an embodiment of the fourth aspect of the invention is a computer-readable storage medium having stored thereon computer-executable instructions for performing the method according to any one of the embodiments of the first aspect of the invention.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The invention is further described with reference to the following figures and examples, in which:
fig. 1 is a schematic diagram of a system architecture for performing a traffic distribution method according to an embodiment of the present invention;
fig. 2 is a flowchart of a traffic distribution method according to an embodiment of the present invention;
fig. 3 is a flowchart of a traffic distribution method according to another embodiment of the present invention;
fig. 4 is a flowchart of a traffic distribution method according to another embodiment of the present invention;
fig. 5 is a flowchart of a traffic distribution method according to an exemplary embodiment of the present invention;
fig. 6 is a flowchart of a traffic distribution method according to an example two of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
In the description of the present invention, it should be understood that the orientation or positional relationship referred to in the description of the orientation, such as the upper, lower, front, rear, left, right, etc., is based on the orientation or positional relationship shown in the drawings, and is only for convenience of description and simplification of description, and does not indicate or imply that the device or element referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention.
In the description of the present invention, "several" means one or more and "a plurality of" means two or more; terms such as greater than, less than and exceeding are understood as excluding the stated number, while above, below, within and the like are understood as including the stated number. If "first" and "second" are used only for the purpose of distinguishing technical features, they are not to be understood as indicating or implying relative importance, implicitly indicating the number of the indicated technical features, or implicitly indicating the precedence of the indicated technical features.
In the description of the present invention, unless otherwise explicitly limited, terms such as arrangement, installation, connection and the like should be understood in a broad sense, and those skilled in the art can reasonably determine the specific meanings of the above terms in the present invention in combination with the specific contents of the technical solutions.
In the description of the present invention, reference to the description of the terms "one embodiment," "some embodiments," "an illustrative embodiment," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
First, several terms related to the present invention are analyzed:
SaaS system (Software-as-a-Service): software as a service, i.e. software provided as a service over a network. The SaaS platform supplier deploys the application software uniformly on its own servers; a user can order the required application software services from the supplier over the Internet according to actual working requirements, pay the supplier according to the amount and duration of the ordered services, and obtain the services provided by the SaaS platform supplier through the Internet.
Nginx: a high-performance reverse proxy server, deployed in front of the destination hosts and mainly used to forward client requests. Multiple HTTP servers in the background provide the actual services; Nginx forwards each request to a subsequent server and decides which target host processes the current request.
Lua: a compact, lightweight and extensible scripting language, and also one of the highest-performing scripting languages.
Currently, with the usual Nginx load balancing approach, server information needs to be set in the Nginx configuration file and the list of servers to be polled is written in an upstream block inside the http block; adding or removing cluster servers requires manually updating the Nginx configuration file, which is cumbersome and error-prone. There is also no solution for scheduling a group of traffic requests with the same characteristics to a given server. Furthermore, no isolation is performed between users, so user data may be obtained or tampered with, inadvertently or maliciously, by others.
Based on this, embodiments of the present invention provide a traffic distribution method, a system, a computer device, and a storage medium, where traffic is distributed to different servers according to request characteristics, so as to implement application logic isolation for different users, and improve data security of the users.
The technical scheme provided by the embodiments of the invention differs from most existing traffic scheduling systems, which schedule traffic based on load balancing policies such as black/white lists and percentages. In this scheme, traffic of the same organization is dispatched to the same group of servers through partition-policy load balancing, so that application logic isolation between different organizations is achieved, solving the technical problem that traffic load balancing methods in the prior art rely on a single balancing policy. The scheme is implemented based on Nginx and Lua scripts; the principle is simple, no business code is intruded upon, it is independent of business developers, and it is transparent to users. The load balancing scheduling policy is handled by Lua, bringing cluster servers online and offline is fully automated, the Nginx configuration does not need to be modified, and accidents caused by manual operation errors are reduced.
The embodiments of the present invention will be further explained with reference to the drawings.
As shown in fig. 1, fig. 1 is a schematic diagram of a system architecture for performing a traffic distribution method according to an embodiment of the present invention. In the example of fig. 1, the system architecture includes at least one organization 110, a Nginx gateway 120, and at least one designated partition 130. It should be noted that at least one organization is assigned under each designated partition, and the organization includes at least one user.
Those skilled in the art will appreciate that the architecture of the apparatus shown in fig. 1 is not intended to be limiting of the system architecture, and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
Various embodiments of the present invention are proposed based on the hardware structure of the above system architecture.
As shown in fig. 2, fig. 2 is a flowchart of a traffic distribution method according to an embodiment of the present invention. The traffic distribution method of the embodiment of the present invention includes, but is not limited to, step S110 and step S120.
Step S110, receiving a request sent by a user, and determining an organization to which the user corresponding to the user request belongs;
Step S120, the user request is sent to the designated partition to which the organization belongs for processing.
Specifically, users are mapped to organizations, and the data and resources of the users of the same organization belong to that organization; the Nginx gateway sends a user request to the designated partition to which the organization belongs for processing. By forwarding all requests of the same organization to its exclusive partition system, data and resources of different organizations are completely isolated and do not affect each other, which effectively improves the security of user data.
Referring to fig. 3, step S120 may be specifically implemented by the following steps:
Step S210, determining a user organization code according to the organization;
Step S220, determining a designated partition address corresponding to the user organization code according to the user organization code;
Step S230, the user request is sent to the designated partition for processing based on the designated partition address.
Specifically, when each organization first registers, a designated organization code is generated for it and a designated partition is assigned to it. The Nginx gateway can determine the corresponding designated partition address according to the user organization code, and send the user request to the designated partition for processing based on the designated partition address.
An organization refers to an enterprise, a group, or the like.
It will be appreciated that after a user is registered, the user organization code may be determined from the user registration name. The user organization code may also be determined based on the user network address.
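As a minimal sketch of how such a lookup might be wired into the Lua layer (the shared dictionaries name_to_org and ip_to_org, the RegName parameter and the resolve_org_code helper are hypothetical names introduced here for illustration only, and the dictionaries would have to be declared with lua_shared_dict in the Nginx configuration):
-- Hypothetical sketch: resolve a user organization code either from a registration
-- name carried in the request or from the client network address.
local function resolve_org_code()
    local args = ngx.req.get_uri_args()
    local reg_name = args["RegName"]                       -- hypothetical parameter name
    if reg_name then
        return ngx.shared.name_to_org:get(reg_name)        -- registration name -> organization code
    end
    return ngx.shared.ip_to_org:get(ngx.var.remote_addr)   -- network address -> organization code
end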
Referring to fig. 4, step S220 may be specifically implemented by the following steps:
Step S310, determining a partition code corresponding to the user organization code according to the user organization code;
Step S320, determining a designated partition address corresponding to the partition code according to the partition code.
Specifically, the Nginx gateway has access to the correspondence between user organization codes and partition codes; it can find the partition code from the user organization code, and then find the designated partition according to the partition code.
It should be noted that the correspondence may be stored locally in the Nginx, or may be stored in any accessible place, such as a database.
Specifically, the Nginx gateway obtains the corresponding designated partition address from the second preset address mapping table according to the partition code.
It can be understood that the Nginx gateway may also obtain the corresponding designated partition address directly from the first preset address mapping table according to the user organization code.
The first preset address mapping table and the second preset address mapping table may be stored in any accessible place, such as a database.
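As an illustrative sketch only (the dictionary names org_to_partition and partition_to_addr are assumptions standing in for the stored correspondence and the second preset address mapping table; the application does not fix where these tables live), the two lookups could be chained in Lua as follows:
-- Two-level lookup: user organization code -> partition code -> designated partition address.
-- Both tables are assumed to be cached in Nginx shared dictionaries declared with lua_shared_dict.
local function lookup_partition_addr(org_code)
    local partition_code = ngx.shared.org_to_partition:get(org_code)
    if not partition_code then
        return nil                                           -- unknown organization code
    end
    return ngx.shared.partition_to_addr:get(partition_code)  -- second preset address mapping table
end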
The traffic distribution method of the present invention is illustrated below by two practical examples.
Example one, referring to fig. 5, the method includes the following steps (a consolidated configuration sketch is given after step 4):
1. Nginx receives all user requests:
(1) monitoring a domain name:
server_name h3yun.com;
(2) associating the backend application:
set $backend www.h3yun.com;
rewrite_by_lua_file loadbalance.lua;
proxy_pass http://$backend;
2. Request identification:
the Lua script parses the organization code of the current requesting user out of the user request;
3. Searching for the IP of the designated partition:
the IP of the designated partition of the organization is looked up according to the organization code EngineCode.
4. Request forwarding:
Nginx forwards the request to the designated partition to which the user organization belongs.
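For readability, the fragments of steps (1) and (2) above can be assembled into a single server block roughly as follows (a sketch only; it keeps the directives given above and adds only the listen directive, whose port is an assumed value):
# Sketch assembling the configuration fragments of example one.
server {
    listen 80;                               # assumed listening port
    server_name h3yun.com;                   # (1) monitored domain name
    set $backend www.h3yun.com;              # (2) default backend application
    rewrite_by_lua_file loadbalance.lua;     # the Lua script may override $backend per organization
    location / {
        proxy_pass http://$backend;          # forward to the designated partition
    }
}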
Example two, referring to fig. 6, the method includes the following steps (a consolidated script sketch is given after the steps):
Nginx receives a user request;
the Lua script parses the user request to obtain the current user organization code:
engineCode = ngx.var.cookie_EngineCode or ngx.req.get_uri_args()["EngineCode"];
obtaining the partition code of the organization according to engineCode:
shardKey=ngx.shared._ups_zone:get(engineCode);
finally, the Lua script accesses the address mapping table through an HTTP request to obtain the IP of the designated partition:
webip = http:request_uri("/GetWebIpByShardKey", {method = "GET"});
then Lua sets the Nginx variable: ngx.var.backend = webip;
Nginx forwards the user request to the designated partition to which the organization belongs.
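Putting the steps of example two together, a loadbalance.lua of roughly the following shape would implement this flow (a sketch under assumptions: the lua-resty-http client, the error handling, the 127.0.0.1 host and the query-string form of the shardKey parameter are additions for completeness and are not specified by the application):
-- Sketch of loadbalance.lua assembled from the steps of example two.
-- Assumes lua_shared_dict _ups_zone is declared and the lua-resty-http library is installed.
local http = require("resty.http")

-- 1. Extract the organization code from a cookie or from the query string.
local engineCode = ngx.var.cookie_EngineCode or ngx.req.get_uri_args()["EngineCode"]
if not engineCode then
    ngx.exit(ngx.HTTP_BAD_REQUEST)            -- no organization code in the request
end

-- 2. Map the organization code to its partition code via the shared dictionary.
local shardKey = ngx.shared._ups_zone:get(engineCode)
if not shardKey then
    ngx.exit(ngx.HTTP_NOT_FOUND)              -- organization has no assigned partition
end

-- 3. Resolve the designated partition address through the address mapping service.
local httpc = http.new()
local res, err = httpc:request_uri("http://127.0.0.1/GetWebIpByShardKey?shardKey=" .. shardKey,
                                   { method = "GET" })
if not res then
    ngx.log(ngx.ERR, "partition lookup failed: ", err)
    ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
end

-- 4. Point $backend at the designated partition; proxy_pass then forwards the request there.
ngx.var.backend = res.body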
One embodiment of the invention provides a SaaS system, which comprises a Nginx gateway, at least one organization and at least one designated partition, wherein the at least one organization is distributed under each designated partition and comprises at least one user;
the Nginx gateway receives a request sent by a user and determines an organization to which the user corresponding to the user request belongs;
the Nginx gateway sends the user request to a designated partition to which the organization belongs for processing.
According to the SaaS system provided by the embodiment of the invention, the flow is distributed to different designated partitions according to the organization to which the user request belongs, so that application logic isolation of different users is realized, and the data security of the users is improved.
For a specific execution step of the SaaS system, reference is made to the above-mentioned traffic distribution method, which is not described herein again.
An embodiment of the present invention further provides a computer device, including: at least one processor, and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any of the method embodiments described above.
Furthermore, an embodiment of the present invention also provides a computer-readable storage medium storing computer-executable instructions for execution by one or more control processors to perform the method in the above-described method embodiment, for example, to perform the above-described method steps S110 to S120 in fig. 2, method steps S210 to S230 in fig. 3, and method steps S310 to S320 in fig. 4.
The above-described embodiments of the apparatus are merely illustrative, and the units illustrated as separate components may or may not be physically separate, may be located in one place, or may be distributed over a plurality of network nodes. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
The terms "first," "second," "third," "fourth," and the like in the description of the invention and in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It is to be understood that, in the present invention, "at least one" means one or more, "a plurality" means two or more. "and/or" for describing an association relationship of associated objects, indicating that there may be three relationships, e.g., "a and/or B" may indicate: only A, only B and both A and B are present, wherein A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of single item(s) or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes multiple instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing programs, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The embodiments of the present invention have been described in detail with reference to the accompanying drawings, but the present invention is not limited to the above embodiments, and various changes can be made within the knowledge of those skilled in the art without departing from the gist of the present invention. Furthermore, the embodiments of the present invention and the features of the embodiments may be combined with each other without conflict.

Claims (10)

1. A traffic distribution method, applied to a SaaS system, wherein the SaaS system comprises a Nginx gateway, at least one organization and at least one designated partition, at least one organization is allocated under each designated partition, the organization comprises at least one user, and the method comprises the following steps:
receiving a request sent by a user, and determining an organization to which the user corresponding to the user request belongs;
and sending the user request to a designated partition to which the organization belongs for processing.
2. The traffic distribution method according to claim 1, wherein said sending the user request to a designated partition to which the organization belongs comprises:
determining a user organization code from the organization;
determining a designated partition address corresponding to the user organization code according to the user organization code;
and sending the user request to the designated partition for processing based on the designated partition address.
3. The traffic distribution method according to claim 2, wherein said determining the designated partition address corresponding to the user organization code according to the user organization code comprises:
determining a partition code corresponding to the user organization code according to the user organization code;
and determining a designated partition address corresponding to the partition code according to the partition code.
4. The traffic distribution method according to claim 2, wherein said determining the designated partition address corresponding to the user organization code according to the user organization code comprises:
and acquiring a corresponding designated partition address from a first preset address mapping table according to the user organization code.
5. A traffic distribution method according to claim 2, characterized in that said user organization code is obtained by the steps comprising:
and acquiring a user registration name, and determining the user organization code according to the user registration name.
6. The traffic distribution method according to claim 3, wherein said determining the designated partition address corresponding to the partition code according to the partition code comprises:
and acquiring a corresponding designated partition address from a second preset address mapping table according to the partition code.
7. A traffic distribution method according to claim 2, characterized in that said user organization code is further obtained by the steps comprising:
and acquiring a user network address, and determining the user organization code according to the user network address.
8. A SaaS system, characterized by comprising a Nginx gateway, at least one organization and at least one designated partition, wherein at least one organization is allocated under each designated partition and the organization comprises at least one user;
the Nginx gateway receives a request sent by a user and determines an organization to which the user corresponding to the user request belongs;
and the Nginx gateway sends the user request to the designated partition to which the organization belongs for processing.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method of any one of claims 1 to 7 when executing the computer program.
10. A storage medium, which is a computer-readable storage medium, characterized in that computer-executable instructions are stored for performing the method of any one of claims 1 to 7.
CN202111569109.3A 2021-12-21 2021-12-21 Flow distribution method, system, computer equipment and storage medium Pending CN114389998A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111569109.3A CN114389998A (en) 2021-12-21 2021-12-21 Flow distribution method, system, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111569109.3A CN114389998A (en) 2021-12-21 2021-12-21 Flow distribution method, system, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114389998A true CN114389998A (en) 2022-04-22

Family

ID=81197872

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111569109.3A Pending CN114389998A (en) 2021-12-21 2021-12-21 Flow distribution method, system, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114389998A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160323191A1 (en) * 2015-05-01 2016-11-03 Google Inc. System and method for granular network access and accounting
US20180300360A1 (en) * 2015-12-22 2018-10-18 Alibaba Group Holding Limited Data information processing method and data storage system
CN110109931A (en) * 2017-12-27 2019-08-09 航天信息股份有限公司 It is a kind of for preventing the method and system that data access clashes between RAC example
CN109617718A (en) * 2018-12-06 2019-04-12 平安科技(深圳)有限公司 The traffic management and control method, apparatus and storage medium of SAAS cloud platform
CN110519380A (en) * 2019-08-29 2019-11-29 北京旷视科技有限公司 A kind of data access method, device, storage medium and electronic equipment
US11089095B1 (en) * 2020-08-21 2021-08-10 Slack Technologies, Inc. Selectively adding users to channels in a group-based communication system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination