CN113301080A - Resource calling method, device, system and storage medium - Google Patents


Info

Publication number
CN113301080A
CN113301080A (application CN202010526842.6A; granted publication CN113301080B)
Authority
CN
China
Prior art keywords
node
protocol port
target
computing node
computing
Prior art date
Legal status
Granted
Application number
CN202010526842.6A
Other languages
Chinese (zh)
Other versions
CN113301080B (en)
Inventor
杜万
储静辉
倪俊佳
谭贺贺
杨苏博
张金红
Current Assignee
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN202010526842.6A
Publication of CN113301080A
Application granted
Publication of CN113301080B
Legal status: Active
Anticipated expiration: pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/10: Protocols in which an application is distributed across nodes in the network
    • H04L67/14: Session management
    • H04L67/141: Setup of application sessions
    • H04L67/50: Network services
    • H04L67/51: Discovery or management thereof, e.g. service location protocol [SLP] or web services
    • H04L67/56: Provisioning of proxy services
    • H04L67/60: Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer And Data Communications (AREA)

Abstract

The embodiments of the present application provide a resource calling method, device, system, and storage medium. In these embodiments, a proxy node is added on the cloud application side, and a communication connection between the proxy node and a computing node is established by the computing node reverse-mapping its own protocol port to a protocol port of the proxy node. The proxy node can therefore invoke the computing node as a computing resource of the cloud application based on the mapping relationship between its protocol port and the protocol port of the computing node. The computing node and the cloud application need not belong to the same cloud vendor, so the cloud application is decoupled from the computing node and its flexibility of use is improved.

Description

Resource calling method, device, system and storage medium
Technical Field
The present application relates to the field of cloud computing technologies, and in particular, to a resource calling method, device, system, and storage medium.
Background
In recent years, cloud computing has developed rapidly, moving data to the cloud has become a mainstream trend, and cloud applications have emerged accordingly. A cloud application relies on a cloud server and a browser to provide services for users. For example, a WebIDE cloud development platform can rely on a cloud server and a browser to provide an integrated development environment in which users develop programs.
In the prior art, a cloud application relies on various computing resources, and these computing resources are tightly coupled with the cloud application, which limits the flexibility of use of the cloud application.
Disclosure of Invention
Aspects of the present application provide a resource calling method, device, system, and storage medium, which are used to decouple a cloud application from a computing resource, thereby improving flexibility of use of the cloud application.
An embodiment of the present application provides a cloud application system, including: a cloud application node, a proxy node, and at least one computing node. The cloud application node is used to provide application services. The proxy node and the at least one computing node establish a communication connection by the at least one computing node reverse-mapping its protocol port to a protocol port of the proxy node; the protocol port of the proxy node is a protocol port registered by a user in advance;
the proxy node maintains the mapping relation between the protocol port of the proxy node and the protocol port of the computing node;
the proxy node is configured to invoke the at least one computing node as a computing resource of the cloud application node based on a mapping relationship between a protocol port of the proxy node and a protocol port of a computing node.
The embodiment of the present application further provides a resource invoking method, applicable to a service node, where a communication connection between the service node and a computing node is established by the computing node reverse-mapping its protocol port to the protocol port of the service node. The method includes:
acquiring an access request; and calling the computing node to process the access request based on the mapping relation between the protocol port of the service node and the protocol port of the computing node.
An embodiment of the present application further provides a server device, where the server device includes: a memory and a processor; wherein the memory is used for storing a computer program;
the processor is coupled to the memory for executing the computer program for performing the steps in the above-described resource calling method;
the communication connection between the server device and the computing node is established by the computing node mapping its protocol port back to the protocol port of the server device.
Embodiments of the present application also provide a computer-readable storage medium storing computer instructions, which, when executed by one or more processors, cause the one or more processors to perform the steps in the above-mentioned resource calling method.
In the embodiment of the application, a proxy node is added on the cloud application side, and a communication connection between the proxy node and the computing node is established by the computing node reverse-mapping its own protocol port to the protocol port of the proxy node. The proxy node can therefore invoke the computing node as a computing resource of the cloud application based on the mapping relationship between its protocol port and the protocol port of the computing node; the computing node and the cloud application need not belong to the same cloud vendor, so the cloud application is decoupled from the computing node and the flexibility of the cloud application is improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1a and 1b are schematic structural diagrams of a conventional cloud application system;
fig. 1c, fig. 1d, and fig. 1e are schematic structural diagrams of a cloud application system provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of a resource calling method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a server device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Aiming at the technical problem that tight coupling between existing computing resources and the cloud application makes the cloud application inflexible, in some embodiments of the application a proxy node is added on the cloud application side, and a communication connection is established between the proxy node and the computing node by the computing node reverse-mapping its protocol port to a protocol port of the proxy node. The proxy node can therefore invoke the computing node as a computing resource of the cloud application based on the mapping relationship between its protocol port and the protocol port of the computing node; the computing node and the cloud application need not belong to the same cloud vendor, so the cloud application is decoupled from the computing node and the flexibility of the cloud application is improved.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
It should be noted that: like reference numerals refer to like objects in the following figures and embodiments, and thus, once an object is defined in one figure or embodiment, further discussion thereof is not required in subsequent figures and embodiments.
In practical applications, from the user's perspective a WebIDE platform needs to provide two core capabilities: editing and running. For the editing capability, the back-end service must provide Application Programming Interfaces (APIs) through which the front-end service can Create, Read, Update, and Delete (CRUD) workspace files; the back-end service provides the computing resources, while the front-end service implements code editing for individual files. The running capability provides the user with the ability to run and debug programs.
Since a user can write to the file system and execute arbitrary programs, the file system, the program execution space, and the network environment need to be isolated for security. In the prior art, either high-security virtual machine isolation or low-cost container isolation is adopted. As shown in fig. 1a and 1b, containers and VMs, as computing resources, both address the problem of resource isolation, although they differ in security. However, in the prior art, computing resources such as containers or VMs need to belong to the same account as the cloud application service; that is, the front-end service and the back-end computing resources must belong to the same account. For example, the front-end service node and the back-end computing node of the cloud application service both belong to the same cloud developer. This tight coupling of back-end computing resources with front-end application services limits the flexibility of use of cloud applications.
On the other hand, different users have different requirements on the scale and security of computing resources, but the computing resources provided by a single cloud developer, whether containers or virtual machines, offer only one scale and one security level, and thus cannot meet the requirements of different users, which further limits the flexibility of use of the cloud application.
In order to solve the above problem, an embodiment of the present application provides a new cloud application system architecture. The cloud application system provided by the embodiment of the application is exemplarily described below.
Fig. 1c is a schematic structural diagram of a cloud application system provided in the embodiment of the present application. As shown in fig. 1c, the cloud application system includes: a cloud application node 11, a proxy node 12, and at least one compute node 13.
In this embodiment, the number of the cloud application nodes 11, the proxy nodes 12, and the computing nodes 13 may be 1 or more. In the embodiments of the present application, a plurality means 2 or more. The cloud application node 11, the agent node 12, and the computing node 13 may be software modules, applications, services, or a physical device that provides application services for users. The plurality of cloud application nodes 11 may be deployed on different physical machines, or may be deployed in different containers, container groups, or Virtual Machines (VMs). Of course, these containers, container groups, or virtual machines may be deployed on the same physical machine, or may be deployed on multiple different physical machines. Of course, the plurality of agent nodes 12 may be deployed on different physical machines, or may be deployed in different containers, container groups, or virtual machines. These containers or virtual machines may be deployed on the same physical machine or on multiple different physical machines. Similarly, the plurality of computing nodes 13 may also be deployed on different physical machines, or may also be deployed in different containers, container groups, or virtual machines. These containers, groups of containers, or virtual machines may be deployed on the same physical machine or on multiple different physical machines.
Alternatively, the cloud application node 11, the agent node 12, and the computing node 13 may be deployed on different physical machines, or may be deployed in different containers, container groups, or virtual machines. These containers, groups of containers, or virtual machines may be deployed on the same physical machine or on multiple different physical machines. Accordingly, the compute nodes 13 may be implemented as container instances, container group instances, or virtual machine instances, among others.
The physical Machine may be a single server device, a cloud server array, or a Virtual Machine (VM) running in the cloud server array. In addition, the physical machine may also refer to other computing devices with corresponding service capabilities, for example, a terminal device (running a service program) such as a computer.
In the case where the cloud application node 11 and the computing node 13 are deployed in different physical machines, the cloud application node 11 and the computing node 13 may be located in the same network. For example, the cloud application node 11 and the computing node 13 are located in the same local area network, or both the cloud application node 11 and the computing node 13 are located in a wide area network (public network). Of course, the cloud application node 11 and the computing node 13 may be located in different networks. For example, the computing node 13 is located in a local area network, the cloud application node 11 is located in a wide area network (public network); alternatively, the cloud application node 11 and the computing node 13 are located in different local area networks. The local area network may be a Virtual Private Cloud (VPC) network.
In this embodiment of the application, if the computing node 13 and the cloud application node 11 are deployed in the same network, the cloud application node 11 may invoke the computing node 13 through the private IP address of the computing node 13. However, if the cloud application node 11 and the computing node 13 are located in different networks, the cloud application node 11 cannot call the computing node 13 located in another network.
In order to solve the above problem, in the present embodiment, a proxy node 12 is added to the network in which the cloud application node 11 is located. In this way, the cloud application node 11 is located in the same network as the proxy node 12. For example, the cloud application node 11 and the proxy node 12 are located in the same local area network, or both are located in a wide area network (public network). When the proxy node 12 and the cloud application node 11 are deployed on different physical machines, the cloud application node 11 and the proxy node 12 may be connected wirelessly or through wires. For example, they may be connected by a network cable or a communication fiber. Alternatively, the cloud application node 11 and the proxy node 12 may be connected through a mobile network; the network standard of the mobile network may be any one of 2G (GSM), 2.5G (GPRS), 3G (WCDMA, TD-SCDMA, CDMA2000, UMTS), 4G (LTE), 4G+ (LTE+), 5G, WiMax, and the like. Optionally, the cloud application node 11 and the proxy node 12 may also be connected by Bluetooth, WiFi, infrared, or the like.
In the present embodiment, the cloud application node 11 is used to provide an application service. In different application scenarios, the application services provided by the cloud application node 11 are different. In some embodiments, the cloud application node 11 may provide an Integrated Development Environment (IDE). Further, the cloud application node 11 may be implemented as a browser-based integrated development environment, i.e., WebIDE. For WebIDE, a user can write codes only by a browser, run the codes in a terminal environment provided by the WebIDE and start a cloud development mode. Accordingly, the cloud application node 11 may provide a code editing service.
In this embodiment, the proxy node 12 may expose its protocol port to the compute node 13. At least one of the computing nodes 13 may belong to the same user or may belong to different users. The user can log in the cloud application node 11 and register the protocol port of the proxy node 12 in advance. For the cloud application node 11, a registration request may be received, the registration request including a user identification. The user identifier may be information that uniquely identifies a registered user, for example, the user identifier may be a registered account or a user name, but is not limited thereto. Further, the cloud application node 11 configures a protocol port of the proxy node 12 for the registered user corresponding to the registration request. Different protocol ports may be configured for different registered users in order to distinguish between different registered users. Accordingly, the cloud application node 11 may maintain a correspondence between the protocol port and the user identification of the proxy node 12. Based on the correspondence, the cloud application node 11 may determine the protocol port of the proxy node 12 corresponding to each registered user.
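The registration flow just described (a registered user is assigned a distinct proxy protocol port, and the cloud application node keeps the port-to-user correspondence) can be sketched as follows. This is a minimal illustration, not the patent's implementation; the class name and the starting port are hypothetical.

```python
# Hypothetical sketch: per-user protocol-port registry on the cloud application node.
# Different registered users are given different registered ports (1024-65535)
# so they can be distinguished when requests are routed.

class PortRegistry:
    def __init__(self, first_port=8001, last_port=65535):
        self._next_port = first_port
        self._last_port = last_port
        self._port_by_user = {}   # user identifier -> proxy protocol port

    def register(self, user_id: str) -> int:
        """Configure a proxy protocol port for the registered user in a registration request."""
        if user_id in self._port_by_user:       # user already registered: reuse the port
            return self._port_by_user[user_id]
        if self._next_port > self._last_port:
            raise RuntimeError("no registered ports left")
        port = self._next_port
        self._next_port += 1
        self._port_by_user[user_id] = port
        return port

    def port_for(self, user_id: str) -> int:
        """Look up the pre-registered proxy port for a user identifier."""
        return self._port_by_user[user_id]
```

A usage example: two users registering receive two distinct ports, and re-registering an existing user returns the port already assigned to that user.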
For protocol ports, ports 0-1023 are well-known ports, and ports 1024-65535 are registered ports. The protocol ports of the proxy node 12 that the user registers in advance are registered ports, i.e., one or more ports in the range 1024-65535. The proxy node 12 may expose the protocol port that the user has previously registered to the compute nodes 13 affiliated with that user. In the case where the proxy node 12 is located in a wide area network (public network) and the computing node 13 is located in a local area network (internal network), the computing node 13 can access the proxy node 12, but the proxy node 12 cannot actively access the computing node 13.
Based on this, the computing node 13 can reverse-map its own protocol port to the protocol port of the proxy node 12, so that the proxy node 12 can provide data received on its protocol port to the corresponding port of the computing node 13. A specific implementation of this reverse mapping is as follows: the compute node 13 may initiate a network connection request to the proxy node 12, where the network connection request includes a first protocol port of the compute node 13 and a second protocol port of the proxy node 12. Alternatively, as shown in fig. 1c, the compute node 13 may send the network connection request to port 22 of the proxy node 12, where port 22 of the proxy node 12 may provide a remote login service. In this network connection request, the second protocol port of the proxy node 12 is the protocol port registered in advance by the user to whom the computing node 13 belongs. The first protocol port of the computing node 13 may be a well-known port, for example port 80.
Correspondingly, the proxy node 12 may establish a mapping relationship between its protocol port and the protocol port of the computing node according to the first protocol port of the computing node 13 and the second protocol port of the proxy node 12, and, based on this mapping relationship, establish a communication connection between the proxy node 12 and the computing node 13, thereby forming a communication tunnel between them. In this way, the proxy node 12 may provide data received on its second protocol port to the first protocol port of the compute node 13.
In the embodiment of the present application, the tunneling protocol to be followed between the proxy node 12 and the compute node 13 is not limited. Alternatively, the tunneling protocol may be, but is not limited to, a TCP protocol, an SSH protocol, or a VPN protocol, etc. The following describes a manner in which the proxy node 12 establishes a communication connection with the computing node 13, taking the SSH protocol as an example.
Further, in the case where the tunneling protocol followed between the proxy node 12 and the computing node 13 is the SSH protocol, the network connection request initiated by the computing node 13 to the proxy node 12 follows the SSH protocol. The proxy node 12 exposes its protocol port to the compute node 13; alternatively, as shown in fig. 1c, the proxy node 12 may expose the protocol port to the compute node 13 through an Application Load Balancer (ALB) node 14. In this way, the compute node 13 can access the protocol port of the proxy node 12 via the TCP protocol.
Further, the computing node 13 runs the -R parameter instruction: HostA$ ssh -R PortB:localhost:PortA HostB. In this instruction, HostA is the host name of the computing node 13, and PortA is the port number of the first protocol port of the computing node 13; HostB is the host name of the proxy node 12, and PortB is the port number of the second protocol port of the proxy node 12. After the computing node 13 successfully runs the -R parameter instruction, the network connection request may be sent to the proxy node 12; the network connection request instructs the proxy node 12 to listen on its second protocol port. Accordingly, in response to the network connection request, the proxy node 12 adds the line GatewayPorts yes to its /etc/ssh/sshd_config file. At this point, the computing node 13 has mapped its first protocol port to the second protocol port of the proxy node 12, and an SSH tunnel is established between the proxy node 12 and the computing node 13. The SSH tunnel is unidirectional: after it is established, the proxy node 12 may provide the data it receives on the second protocol port to the compute node 13, but the compute node 13 cannot actively request data from the proxy node 12.
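A compute node's connection unit could open such a reverse tunnel by driving the stock OpenSSH client, as in the minimal sketch below. The host names and ports are illustrative assumptions; -N keeps the SSH session open without running a remote command, and standard ssh -R semantics make the remote (proxy) side listen on the forwarded port.

```python
# Hypothetical sketch: opening the reverse SSH tunnel with the OpenSSH client.
import subprocess

def reverse_tunnel_cmd(local_port: int, proxy_host: str, proxy_port: int) -> list:
    # ssh -R <proxy_port>:localhost:<local_port> <proxy_host>
    # asks the proxy's sshd to listen on proxy_port and forward incoming
    # connections back through the tunnel to local_port on this compute node.
    return ["ssh", "-N", "-R", f"{proxy_port}:localhost:{local_port}", proxy_host]

def open_reverse_tunnel(local_port: int, proxy_host: str, proxy_port: int):
    # Note: GatewayPorts yes must be enabled in the proxy's /etc/ssh/sshd_config,
    # otherwise the forwarded port is only reachable from the proxy host itself.
    return subprocess.Popen(reverse_tunnel_cmd(local_port, proxy_host, proxy_port))
```

For example, reverse_tunnel_cmd(80, "proxy.example.com", 8001) builds the command that maps the compute node's port 80 to port 8001 on the proxy, matching the 80-to-8001 mapping shown in fig. 1c.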
For the proxy node 12, a mapping between its second protocol port and the first protocol port of the compute node 13 may be established. The proxy node 12 may also listen on its second protocol port and, when data is received on that port, forward it to the computing node 13 based on the mapping relationship between the second protocol port of the proxy node 12 and the first protocol port of the computing node 13.
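The forwarding step, in which the proxy relays whatever arrives on its second protocol port into the tunnel toward the compute node, reduces to copying bytes between two sockets. The helper below is a minimal sketch of one such copy, not the patent's implementation.

```python
# Hypothetical sketch: the proxy relaying data received on its second protocol
# port to the tunnelled connection leading to the compute node's first port.
import socket

def forward_once(src: socket.socket, dst: socket.socket, bufsize: int = 4096) -> int:
    """Copy one chunk of data from src to dst; returns the number of bytes forwarded."""
    data = src.recv(bufsize)
    if data:
        dst.sendall(data)
    return len(data)
```

In a real proxy this would run in a loop (or under an event loop) per connection until recv returns an empty byte string, which signals that the peer closed the connection.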
In the embodiment of the present application, since the computing node 13 provides computing resources for the application services of the cloud application node 11, the protocol ports of the computing node 13 are well-known ports, i.e., one or more of ports 0-1023. For example, as shown in fig. 1c, in some embodiments the access request received by the cloud application node 11 is an HTTP request, and the protocol port of the computing node 13 is port 80. Accordingly, the computing node 13 may reverse-map its port 80 to a registered port of the proxy node 12; fig. 1c illustrates only the case where the computing node 13 reverse-maps its port 80 to port 8001 of the proxy node, which is not limiting.
Alternatively, as shown in fig. 1c, the computing node 13 may include a connection unit 13a and a proxy core 13b. In this embodiment, the proxy core 13b is the real computing resource of the compute node 13, and the connection unit 13a is configured to reverse-map the protocol port of the proxy core 13b to the protocol port of the proxy node 12. It is worth noting that, in the above or following embodiments, the protocol port of the computing node 13 may be the protocol port of the proxy core 13b. For the specific implementation of the connection unit 13a reverse-mapping the protocol port of the proxy core 13b to the protocol port of the proxy node 12, reference may be made to the description of the computing node 13 reverse-mapping its protocol port to the protocol port of the proxy node 12, which is not repeated here.
Based on the above analysis, in the embodiment of the present application, the proxy node 12 maintains a mapping relationship between the protocol port of the proxy node and the protocol port of the compute node; and may invoke at least one computing node 13 as a computing resource for the cloud application node 11 based on the mapping relationship.
In the cloud application system provided in this embodiment, a proxy node is added on the cloud application side, and the proxy node and the computing node establish a communication connection by the computing node reverse-mapping its protocol port to the protocol port of the proxy node. The proxy node can therefore invoke the computing node as a computing resource of the cloud application based on the mapping relationship between its protocol port and the protocol port of the computing node; the computing node and the cloud application need not belong to the same cloud vendor, so the cloud application is decoupled from the computing node and the flexibility of the cloud application is improved.
On the other hand, in the cloud application system provided by the embodiment of the application, computing resources can be provided by the user: the cloud application system solves the problem of the network link between the cloud application node and the computing resources (computing nodes), and does not need to care which isolation technology each computing node adopts, or even whether the computing nodes are isolated at all. As shown in fig. 1d, the computing resource provided by the user may be an Elastic Compute Service (ECS) instance, an Elastic Container Instance (ECI), a container or a virtual machine, and may even be a terminal device such as a Personal Computer (PC). That is to say, the computing node and the cloud application service may belong to different accounts, or even to different cloud developers, so that the cloud application is decoupled from the computing node and its flexibility is improved. The front-end node in fig. 1d may be implemented as the cloud application node 11 and proxy node 12 in fig. 1c.
In the embodiment of the present application, when the proxy node 12 receives an access request, it may invoke the at least one computing node 13 as a computing resource of the cloud application node 11 and provide the access request to the at least one computing node 13 for processing. Because the computing nodes of different registered users are isolated from each other, different registered users have different computing nodes. An access request may contain the user identifier of the registered user. Accordingly, the cloud application node 11 obtains the access request and parses the user identifier from it. Further, the cloud application node 11 may determine, according to the user identifier and the correspondence between the protocol ports of the proxy node and user identifiers, the target protocol port registered in advance by the registered user corresponding to that identifier, and provide the access request to the target protocol port of the proxy node 12.
Accordingly, the proxy node 12 obtains the access request from the target protocol port, that is, the proxy node 12 listens to the target protocol port to obtain the access request. Further, the agent node 12 may determine a target computing node from the at least one computing node 13 according to a mapping relationship between the protocol port of the agent node and the protocol port of the computing node and an identifier of the target protocol port; and providing the access request to the target computing node for the target computing node to process the access request. The number of the target computing nodes can be one or more. Plural means 2 or more.
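The routing step above, from the identifier of the target protocol port to a target computing node, is a lookup in the proxy's mapping table. The sketch below illustrates this with a hypothetical mapping; the addresses and ports are made up for the example and mirror the 80-to-8001 style mapping of fig. 1c.

```python
# Hypothetical sketch: the proxy's mapping from its registered (target) protocol
# ports to the compute-node ports that were reverse-mapped onto them.
port_to_node = {
    8001: ("10.0.0.5", 80),   # proxy port 8001 -> compute node A, port 80
    8002: ("10.0.0.6", 80),   # proxy port 8002 -> compute node B, port 80
}

def resolve_target(target_port: int):
    """Determine the target compute node for a request received on target_port."""
    try:
        return port_to_node[target_port]
    except KeyError:
        raise LookupError(f"no compute node reverse-mapped to port {target_port}")
```

A request arriving on target port 8001 is thus routed to compute node A's port 80; a port with no reverse mapping yields an error instead of a silent misroute.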
Access requests differ in purpose, and the target computing node processes them in correspondingly different ways. In some embodiments, the access request requires access to a storage resource. Accordingly, as shown in fig. 1e, the cloud application system may further include a storage node 15. The number of storage nodes 15 may be 1 or more. Multiple storage nodes 15 may be deployed on different physical machines, or in different containers, container groups, or virtual machines. Of course, these containers, container groups, or virtual machines may be deployed on the same physical machine or on multiple different physical machines. The storage node 15 may take different forms; for example, it may be a Network Attached Storage (NAS) node, an Object Storage Service (OSS) node, or the like. Fig. 1e illustrates only the case where the storage node 15 is a NAS node.
In this embodiment, the target computing node may mount the storage node 15 on the proxy node 12 when processing the access request, and access the storage node according to the access request. Storage nodes implemented with different technologies are mounted on the proxy node 12 by the target computing node in different manners. Taking the storage node 15 as a NAS node implemented based on NAS technology and as an OSS node implemented based on OSS technology as examples, this is exemplarily described below.
Alternatively, as shown in fig. 1e, if the storage node 15 is a NAS node, the access request may follow the Network File System (NFS) protocol. In this way, the target computing node may access the NAS node through Remote Procedure Calls (RPC); that is, by using the NFS protocol, the target computing node may access data in the storage node as if it were local data, thereby mounting the storage node 15 on the proxy node 12 as a storage medium of the proxy node 12. Because the NAS node stores the data, it can be deployed and operated independently, without being coupled to the computing nodes, which improves the flexibility, stability, and security of the cloud application platform.
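As an illustrative configuration sketch only (the server address, export path, and mount options are hypothetical, not taken from this application), mounting a NAS node over NFS so that its data appears as local data typically looks like:

```shell
# Hypothetical NFS mount: "nas-server.example.com" and the paths are placeholders.
mkdir -p /mnt/nas
mount -t nfs -o vers=3,nolock nas-server.example.com:/shared /mnt/nas
# After mounting, files under /mnt/nas are read and written like local data,
# while the NFS client translates each operation into an RPC to the NAS node.
```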
Alternatively, if the storage node 15 is an OSS node, the storage node 15 may expose a platform-independent API, and the target computing node may call the API of the OSS node to access it, so that the storage node 15 is mounted on the proxy node 12 as a storage medium of the proxy node 12.
Optionally, the target computing node mounts the storage node 15 on the proxy node 12 and performs memory mapping on the proxy node 12; that is, it maps the data stored in the storage node 15 into the virtual address space of the process of the proxy node 12 and establishes a correspondence between the virtual address space and the data stored in the storage node 15. In this way, the proxy node 12 can access the storage node 15 directly, without the assistance of the Central Processing Unit (CPU) of the proxy node 12, which helps improve the efficiency of the proxy node 12's access to the storage node 15.
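The memory-mapping idea — data accessed through a process's virtual address space rather than explicit reads — can be illustrated with Python's standard-library `mmap` module; the temporary file below merely stands in for data held by a storage node:

```python
import mmap
import os
import tempfile

# A temporary file stands in for the data held by the storage node.
fd, path = tempfile.mkstemp()
os.write(fd, b"stored-node-data")
os.close(fd)

# Map the file into this process's virtual address space; subsequent byte
# accesses go through the mapping instead of explicit read() calls.
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mapped:
        snippet = mapped[:6]   # slice access via the virtual address space
os.remove(path)
```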
In some embodiments, the access request is a write request containing data to be written. Correspondingly, when accessing the storage node according to the access request, the target computing node is specifically configured to: apply for a target storage space in the storage node 15 according to the size of the data to be written, and write the data to be written into the target storage space. The capacity of the target storage space is greater than or equal to the size of the data to be written.
In other embodiments, the access request is a read request containing the target storage address, in the storage node 15, of the data to be read. Correspondingly, when accessing the storage node according to the access request, the target computing node is specifically configured to: read the data to be read from the storage space pointed to by the target storage address, and provide the data to be read to the proxy node 12. Accordingly, the proxy node 12 provides the data to be read to the cloud application node 11, so that the cloud application node 11 outputs it.
In yet other embodiments, the access request is a query request; the query request contains a query condition. Correspondingly, when the target computing node accesses the storage node according to the access request, the target computing node is specifically configured to: searching target data meeting the query condition in a storage node; and provides the target data to the proxy node 12. Accordingly, the proxy node 12 provides the target data to the cloud application node 11 for the cloud application node 11 to output the target data.
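A minimal in-memory sketch of the three request types above (write, read, query); the dictionary stands in for the storage node and every name is a hypothetical illustration:

```python
storage = {}      # stand-in storage node: address -> data
next_addr = 0     # next free address

def handle(request):
    """Process a write, read, or query access request as described above."""
    global next_addr
    kind = request["type"]
    if kind == "write":
        data = request["data"]
        addr = next_addr                  # apply for space >= len(data)
        storage[addr] = data
        next_addr += max(len(data), 1)
        return addr                       # where the data was written
    if kind == "read":
        return storage[request["address"]]   # space pointed to by the address
    if kind == "query":
        cond = request["condition"]
        return [d for d in storage.values() if cond(d)]
    raise ValueError(f"unknown request type: {kind}")

addr = handle({"type": "write", "data": b"hello"})
read_back = handle({"type": "read", "address": addr})
matches = handle({"type": "query", "condition": lambda d: d.startswith(b"he")})
```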
The number of target computing nodes processing an access request may be 1 or more. The computing node 13 may be an Elastic Cloud Server (ECS), an Elastic Container Instance (ECI), a container or a virtual machine, or even a terminal device such as a Personal Computer (PC). A single-machine or cluster deployment scheme, such as ECSs or personal computers, carries a heavy machine operation and maintenance cost and lacks the ability to flexibly allocate resources and elastically scale, which inevitably wastes some resources when access traffic is low. For this reason, the computing node 13 is preferably implemented as an ECI, which provides serverless computing resources. Users of the cloud application then need not concern themselves with the allocation and management of the underlying machines, which reduces the maintenance cost of the whole platform and adds the ability to automatically scale out or in according to access traffic. ECIs may be scaled in when access traffic dips, helping to conserve computing resources.
An example of how the proxy node 12 invokes the computing node 13 as the computing resource of the cloud application node 11 is described below with the computing node 13 as the ECI. In different application scenarios, the cloud application service provided by the cloud application node 11 is different. Fig. 1e illustrates only the cloud application node 11 providing the WebIDE service, but is not limited thereto.
In the embodiment of the present application, the computing node 13 may be implemented as an ECI, and an ECI may provide a variety of computing services. For example, as shown in fig. 1e, the containers provided in this embodiment may include: (1) a link service container, which can establish the link between the cloud application node 11 and the computing node 13, that is, the front-end and back-end links of the cloud application system; (2) a core service container, which can provide core services such as adding, reading, writing, updating, and deleting data; (3) a compiling service container, which can compile the functional components selected by a user on the editing interface into corresponding code; (4) a terminal service container, which can maintain terminal information of registered users, etc.; (5) a language service container, which can add language annotations to the code compiled by the compiling service container; (6) a memory mapping container, which can mount the storage node on the proxy node and perform memory mapping between the proxy node and the storage node.
As shown in fig. 1e, the cloud application node 11 may provide WebIDE services. In fig. 1e, the cloud application node 11 and the proxy node 12 are implemented as WebIDE cloud services. Accordingly, the cloud application node 11 may provide an editing interface. For registered users of WebIDE, the registered users only need a browser to log in to the cloud application node 11. Accordingly, the cloud application node 11 may provide an editing interface to the registered user. The editing interface comprises a functional component, and a user can develop codes by selecting the functional component.
The cloud application node 11 may, in response to a selection operation on a functional component, generate a first access request according to the identifier of the selected functional component. The first access request contains the user identifier and the identifier of the selected functional component. Further, the cloud application node 11 may determine, according to the user identifier, the target protocol port corresponding to the user identifier from among the protocol ports of the proxy node 12. Further, the proxy node 12 determines the target computing node according to the identifier of the target protocol port and the correspondence between the protocol ports of the proxy node and the protocol ports of the computing nodes. Here, the target computing node includes the compiling container described above. Further, the proxy node 12 provides the first access request to the target computing node (compiling container), and the target computing node (compiling container) compiles the functional component into its corresponding code.
Optionally, when compiling the functional component into its corresponding code, the target computing node (compiling container) may read, according to the identifier of the selected functional component, the code corresponding to that identifier from a preset storage space and use it as the code corresponding to the selected functional component. In this way, visual functional components are connected with source code, enabling both visual drag-and-drop editing and source-code editing, so that the advantages of visual editing and code editing are fully exploited and development efficiency is improved. The code corresponding to each functional component may be stored in the storage node 15 or in another storage space corresponding to the cloud application service.
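Compilation by lookup, as described above, amounts to reading pre-stored code keyed by the component identifier; a sketch with hypothetical identifiers and code snippets:

```python
# Hypothetical preset storage space: component identifier -> source code.
COMPONENT_CODE = {
    "button": "export const Button = () => <button>OK</button>;",
    "table": "export const Table = () => <table></table>;",
}

def compile_component(component_id):
    """'Compile' a visual functional component by fetching its stored code."""
    try:
        return COMPONENT_CODE[component_id]
    except KeyError:
        raise ValueError(f"no code registered for component {component_id!r}")

code = compile_component("button")
```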
The memory mapping container may mount the storage node 15 on the proxy node 12 and perform memory mapping on the proxy node 12 and the storage node 15 when receiving the first access request.
Further, the target computing node (the above core service container) may also save the code corresponding to the functional component in the storage node 15. Optionally, the editing interface may further include a save component, and the registered user may trigger the save component to save the code corresponding to the functional component. Accordingly, the cloud application node 11 may, in response to the save operation, take the code corresponding to the functional component as the code to be written and generate a second access request according to it; the second access request contains the code to be written and the user identifier. Further, the cloud application node 11 may determine, according to the user identifier, the target protocol port corresponding to the user identifier from among the protocol ports of the proxy node 12. Further, the proxy node 12 determines the target computing node according to the identifier of the target protocol port and the correspondence between the protocol ports of the proxy node and the protocol ports of the computing nodes. Here, the target computing node includes the core service container described above.
The core service container can apply, according to the size of the code to be written, for a target storage space in the storage node 15 for that code, where the capacity of the target storage space is greater than or equal to the size of the code to be written. Further, the core service container may write the code to be written into the target storage space.
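One plausible policy (an illustrative assumption, not mandated by this application) satisfying "capacity greater than or equal to the size of the data" is to round the request up to whole allocation blocks:

```python
BLOCK = 4096  # hypothetical allocation granularity of the storage node

def apply_storage_space(size):
    """Apply for a target storage space whose capacity is >= size,
    here by rounding up to a whole number of blocks."""
    blocks = -(-size // BLOCK)   # ceiling division
    return blocks * BLOCK

capacity = apply_storage_space(5000)
```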
Optionally, the cloud application node 11 may also provide a code editing function. Correspondingly, before taking the code corresponding to the functional component as the code to be written, the cloud application node 11 may also display that code on the editing interface, so that the registered user may view and/or modify it.
Before the compiling service container provides the code corresponding to the functional component to the cloud application node 11, the language service container may also add language annotations to that code and provide the annotated code to the cloud application node 11 as the code corresponding to the functional component. Here, the target computing node includes the language service container described above. Accordingly, the cloud application node 11 displays the code corresponding to the functional component, that is, the code with language annotations, on the editing interface, making it easier for the registered user to read and understand.
Further, after the registered user views and/or modifies the code corresponding to the functional component, the save component can be triggered to save it. Accordingly, the cloud application node 11 may, in response to the save operation, take the code corresponding to the functional component as the code to be written and generate a second access request according to it; the second access request contains the code to be written and the user identifier. For the specific implementation by which the cloud application node 11, the proxy node 12, and the core service container process the second access request, reference may be made to the related contents above, which are not repeated here.
Further, when the user wants to publish the developed software, a publish control on the editing interface may be triggered. The cloud application node 11 generates a publish request in response to the triggering of the publish control. The publish request contains the user identifier and the identifier of the software to be published. Further, the cloud application node 11 provides the publish request to the target protocol port of the proxy node 12. After detecting that the target protocol port has received the publish request, the proxy node 12 determines the target computing node according to the mapping relationship between the protocol ports of the proxy node and the protocol ports of the computing nodes and the identifier of the target protocol port. The target computing node includes the core service container. Further, the proxy node 12 provides the publish request to the target computing node. The target computing node reads the code file corresponding to the software to be published from the storage node 15 according to the identifier of that software and pushes the code file to a GitLab repository. Further, the WebIDE cloud service may access the Def platform, create a Content Delivery Network (CDN), and use the CDN to distribute the code file corresponding to the software to be published to browsers on the user side.
Alternatively, as shown in fig. 1e, the computing nodes 13 may be connected to the GitLab repository and the Def platform by a high-speed private network. The EPaaS gateway in fig. 1e refers to a gateway node deployed in the PaaS platform, and may be implemented as the proxy node 12 in fig. 1c.
It should be noted that the control logic of the cloud application node 11 and the proxy node 12 may be implemented based on Function as a Service (FaaS). The cloud application node 11 and the proxy node 12 adopt Function Compute (FC) as the supporting platform for managing the computing nodes 13 and the storage node 15. Because this control logic is developed on the FaaS platform, only the control logic itself needs to be provided during development; the code implementing it is deployed to the FaaS platform through the platform's fun tool, so that subsequent work such as scaling resources across access-traffic peaks and valleys, operation, and maintenance can be dispensed with. In the embodiment of the present application, the control logic mainly includes: (1) scheduling control such as creation, destruction, and restarting of the ECI containers; (2) management and control of the storage node 15 through the memory mapping container. Alternatively, the control logic may mount the storage node on the proxy node 12 via the memory mapping container, perform memory mapping between the proxy node 12 and the storage node 15, and unmount the storage node 15 from the proxy node 12 after access to it has finished.
Because the control logic of the cloud application node 11 and the proxy node 12 is implemented serverlessly on a FaaS platform, the entire cloud application platform requires no operation and maintenance from its users. Meanwhile, the FaaS platform and the ECI scaling mechanism improve the usability of the cloud application platform and reduce its overall maintenance cost.
In addition to the cloud application system provided in the foregoing embodiment, the embodiment of the present application also provides a resource calling method, and the following provides an exemplary description of the resource calling method provided in the embodiment of the present application.
Fig. 2 is a schematic flowchart of a resource calling method according to an embodiment of the present application. As shown in fig. 2, the method includes:
201. Obtain an access request.
202. Invoke a computing node to process the access request based on a mapping relationship between a protocol port of the service node and a protocol port of the computing node.
In this embodiment, the communication connection between the service node and the computing node is established by the computing node reverse-mapping its protocol port to a protocol port of the service node. The execution logic of the service node in this embodiment is the combination of the execution logic of the cloud application node and that of the proxy node in the cloud application system above. For the implementation forms of the cloud application node, the proxy node, and the computing node, reference may be made to the related contents of the foregoing cloud application system embodiment, which are not repeated here.
In this embodiment, in the case where the service node and the computing node are deployed on different physical machines, they may be located in the same network or in different networks. For example, the computing node may be located in a local area network and the service node in a wide area network (public network); alternatively, the service node and the computing node may be located in different local area networks. A local area network here may be a Virtual Private Cloud (VPC) network.
In the embodiment of the present application, if the compute node and the service node are deployed in the same network, the service node may invoke the compute node through the private IP address of the compute node. However, if the service node and the compute node are located in different networks, the service node cannot invoke the compute node located in the other network.
To address the above issue, a service node may expose its protocol ports to the computing nodes. The number of computing nodes is at least one, and the at least one computing node may belong to the same user or to different users. A user can log in to the cloud application node and register a protocol port of the proxy node in advance. The service node may receive a registration request that includes a user identifier. The user identifier may be any information that uniquely identifies a registered user, for example a registered account or a user name, but is not limited thereto. Further, the service node configures a protocol port of the proxy node for the registered user corresponding to the registration request. To distinguish registered users, different protocol ports may be configured for different registered users. Accordingly, the service node may maintain the correspondence between its protocol ports and user identifiers, based on which it can determine the protocol port corresponding to each registered user.
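Configuring a distinct protocol port per registered user can be sketched as handing out ports from the registered range (1024–65535); all names below are hypothetical:

```python
import itertools

next_port = itertools.count(1024)   # registered-port range starts at 1024
port_of_user = {}                   # correspondence: user identifier -> protocol port

def register(user_id):
    """Configure a distinct protocol port for each registered user."""
    if user_id not in port_of_user:
        port = next(next_port)
        if port > 65535:
            raise RuntimeError("registered port range exhausted")
        port_of_user[user_id] = port
    return port_of_user[user_id]

p_alice = register("alice")
p_bob = register("bob")
```

Registering the same user twice returns the already-configured port, so the correspondence stays one-to-one.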
Among protocol ports, ports 0–1023 are well-known ports, and ports 1024–65535 are registered ports. The protocol ports of the service node that users pre-register are registered ports, i.e., one or more of ports 1024–65535. The service node may expose a user's pre-registered protocol port to the computing nodes affiliated with that user. In the case where the service node is located in a wide area network (public network) and the computing node is located in a local area network (internal network), the computing node can access the service node, but the service node cannot access the computing node.
Based on the above, the computing node can reverse-map its protocol port to a protocol port of the service node, so that the service node can provide data received on that protocol port to the corresponding port of the computing node. A specific implementation of this reverse mapping is as follows: the computing node initiates a network connection request to the service node; the network connection request includes a first protocol port of the computing node and a second protocol port of the service node. For this network connection request, the second protocol port of the service node is the protocol port pre-registered by the user to which the computing node belongs. The first protocol port of the computing node may be a well-known port; for example, it may be port 80.
Correspondingly, the service node can establish a mapping relationship between its own protocol port and the protocol port of the computing node according to the first protocol port of the computing node and the second protocol port of the service node, and establish the communication connection between the service node and the computing node based on that mapping relationship, thereby forming a communication tunnel between them. In this way, the service node may provide data received on its second protocol port to the first protocol port of the computing node.
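On the service-node side, handling the network connection request reduces to recording which second protocol port maps to which compute-node first protocol port; a sketch with hypothetical field names:

```python
port_map = {}   # second port (service node) -> (computing node, first port)

def on_connection_request(request):
    """Record the reverse mapping carried by a compute node's connection request."""
    second = request["second_port"]   # pre-registered port of the service node
    port_map[second] = (request["node"], request["first_port"])

on_connection_request({"node": "eci-1", "first_port": 80, "second_port": 15001})
target = port_map[15001]   # where traffic on port 15001 should be forwarded
```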
In the embodiment of the present application, a tunneling protocol to be followed between the service node and the compute node is not limited. Alternatively, the tunneling protocol may be, but is not limited to, a TCP protocol, an SSH protocol, or a VPN protocol, etc. For a specific implementation of establishing a communication connection between the service node and the computing node, reference may be made to the relevant contents of the cloud application system embodiment described above, and details are not described herein again.
For the service node, a mapping relationship between the second protocol port of the service node and the first protocol port of the computing node may be established. The service node can also listen on its second protocol port and, upon detecting that data has been received on it, forward the received data to the computing node based on that mapping relationship.
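Listening on the second protocol port and forwarding received data onward can be sketched with standard-library sockets; the loopback demo below stands in for the service node and a computing node (a single-shot relay for illustration, not production tunnel code):

```python
import socket
import threading

def forward_once(listen_sock, target_addr):
    """Accept one connection on the service node's second protocol port and
    relay its bytes to the computing node's first protocol port, then back."""
    conn, _ = listen_sock.accept()
    with conn:
        data = conn.recv(4096)
        with socket.create_connection(target_addr) as upstream:
            upstream.sendall(data)
            reply = upstream.recv(4096)
        conn.sendall(reply)

# Stand-in "computing node": echoes bytes on its first protocol port.
backend = socket.socket()
backend.bind(("127.0.0.1", 0))
backend.listen(1)

def echo_once():
    c, _ = backend.accept()
    with c:
        c.sendall(c.recv(4096))

threading.Thread(target=echo_once, daemon=True).start()

# Stand-in "service node": listens on its second protocol port and forwards.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
threading.Thread(target=forward_once,
                 args=(listener, backend.getsockname()), daemon=True).start()

with socket.create_connection(listener.getsockname()) as client:
    client.sendall(b"ping")
    reply = client.recv(4096)
```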
In the embodiment of the present application, since the computing node provides computing resources for the application services provided by the service node, the protocol ports of the computing node are well-known ports, i.e., one or more of ports 0–1023. For example, the protocol port of the computing node may be port 80. Accordingly, the computing node may reverse-map its port 80 to a certain registered port of the service node.
Based on the above analysis, in the embodiment of this application the service node maintains the mapping relationship between its protocol ports and the protocol ports of the computing nodes, and, based on that mapping relationship, invokes at least one computing node as a computing resource to process the obtained access request.
In this embodiment, the communication connection between the service node and the computing node is established by the computing node reverse-mapping its protocol port to a protocol port of the service node. Therefore, the service node can invoke the computing node as a computing resource of the cloud application based on the mapping relationship between its protocol port and that of the computing node; the computing node and the cloud application need not belong to the same cloud vendor, so the cloud application and the computing node are decoupled and the flexibility of the cloud application is improved.
On the other hand, the computing nodes can be provided by users: the cloud application system solves the problem of the network link between the cloud application node and the computing resources (computing nodes), and need not care which isolation technology is adopted between computing nodes, or even whether the computing nodes are isolated at all. A computing resource provided by a user may be an Elastic Cloud Server (ECS), an Elastic Container Instance (ECI), a container or a virtual machine, or even a terminal device such as a Personal Computer (PC). That is to say, the computing node and the cloud application service may belong to different accounts, or even to different cloud vendors, so that the cloud application and the computing node are decoupled and the flexibility of the cloud application is improved.
In this embodiment of the application, when receiving an access request, the service node may invoke at least one computing node as a computing resource of the cloud application to process it. Because the computing nodes of different registered users are isolated from each other, each registered user has a different set of computing nodes. An access request may therefore contain the user identifier of the registered user. Accordingly, the service node may obtain the access request and parse the user identifier from it. Further, the service node may determine, according to the user identifier and the correspondence between its protocol ports and user identifiers, the target protocol port pre-registered by the registered user and corresponding to the user identifier; determine a target computing node from the at least one computing node according to the mapping relationship between the protocol ports of the service node and the protocol ports of the computing nodes and the identifier of the target protocol port; and invoke the target computing node to process the access request. Alternatively, the service node may provide the access request to the target computing node for processing. The number of target computing nodes may be one or more. Plural means 2 or more.
Access requests differ in purpose, and the target computing node processes them in correspondingly different ways. In some embodiments, the access request requires access to a storage resource. Accordingly, the cloud application system may further include a storage node. The number of storage nodes may be 1 or more. For the implementation forms of the storage node, reference may be made to the related contents of the above system embodiment, which are not repeated here. Storage nodes may take different forms; for example, a storage node may be a Network Attached Storage (NAS) node, an Object Storage Service (OSS) node, or the like.
In this embodiment, when invoking the target computing node to process the access request, the service node may invoke the target computing node to mount the storage node on the service node, and invoke the target computing node to access the storage node according to the access request. Storage nodes implemented with different technologies are mounted on the service node by the target computing node in different manners; for the specific description, reference may be made to the related contents of the above system embodiment, which are not repeated here.
Optionally, the target computing node mounts the storage node on the service node and performs memory mapping on the service node; that is, it maps the data stored in the storage node into the virtual address space of the process of the service node and establishes a correspondence between the virtual address space and the data stored in the storage node. In this way, the service node can access the storage node directly without the assistance of the Central Processing Unit (CPU) of the service node, which helps improve the efficiency of the service node's access to the storage node.
In some embodiments, the access request is a write request containing data to be written. Accordingly, one implementation by which the service node invokes the target computing node to access the storage node according to the access request is as follows: invoke the target computing node to apply for a target storage space in the storage node according to the size of the data to be written, and invoke the target computing node to write the data to be written into the target storage space. The capacity of the target storage space is greater than or equal to the size of the data to be written.
In other embodiments, the access request is a read request; the read request includes a target storage address of the data to be read in the storage node. Accordingly, another embodiment of the service node invoking the target computing node to access the storage node according to the access request is as follows: calling a target computing node to read data to be read from a storage space pointed by a target storage address; and outputs the data to be read.
In yet other embodiments, the access request is a query request containing a query condition. Accordingly, yet another implementation by which the service node invokes the target computing node to access the storage node according to the access request is as follows: invoke the target computing node to search the storage node for target data that satisfies the query condition, and output the target data.
The number of target computing nodes processing an access request may be 1 or more. A computing node may be an Elastic Cloud Server (ECS), an Elastic Container Instance (ECI), a container or a virtual machine, or even a terminal device such as a Personal Computer (PC). A single-machine or cluster deployment scheme, such as ECSs or personal computers, carries a heavy machine operation and maintenance cost and lacks the ability to flexibly allocate resources and elastically scale, which inevitably wastes some resources when access traffic is low. For this reason, the computing nodes are preferably implemented as ECIs, which provide serverless computing resources. Users of the cloud application then need not concern themselves with the allocation and management of the underlying machines, which reduces the maintenance cost of the whole platform and adds the ability to automatically scale out or in according to access traffic. ECIs may be scaled in when access traffic dips, helping to conserve computing resources.
In different application scenarios, the cloud application services provided by the service nodes differ. In the following, taking the WebIDE service provided by the service node as an example, an embodiment of how the service node invokes the computing node as its computing resource is described.
The service node may provide WebIDE services and, accordingly, an editing interface. A registered WebIDE user only needs a browser to log on to the service node, and the service node then provides the editing interface to that user. The editing interface includes functional components, and the user can develop code by selecting a functional component.
For the service node, in response to the selection operation for a functional component, it invokes the target computing node to compile the code corresponding to the target functional component according to the identifier of the selected target functional component, and invokes the target computing node to write the code corresponding to the target functional component into the storage node.
Here, the target computing node includes the above compiling container and the above core service container. Optionally, when compiling the functional component into its corresponding code, the target computing node (compiling container) may read, according to the identifier of the selected functional component, the code corresponding to that identifier from a preset storage space as the code corresponding to the selected functional component. The code corresponding to each functional component can be stored in the storage node, or in another storage space corresponding to the cloud application service.
Optionally, before the target computing node is called to write the code corresponding to the target functional component into the storage node, the service node may further display the code corresponding to the target functional component on an editing interface, so that the registered user can view and/or modify the code corresponding to the functional component.
Optionally, the editing interface may further include a saving component, and the registered user may trigger the saving component to save the code corresponding to the functional component. Correspondingly, the service node can respond to the saving operation and take the code corresponding to the functional component as the code to be written; and calling a target computing node to write the code corresponding to the target functional component into the storage node.
For the core service container, it applies for a target storage space in the storage node according to the size of the code to be written, where the capacity of the target storage space is greater than or equal to the size of the code to be written. Further, the core service container may write the code to be written to the target storage space.
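The capacity constraint described above (granted space at least as large as the code to be written) can be sketched as follows. The function names and the block-based allocation policy are hypothetical; the patent only requires that the granted capacity is greater than or equal to the write size.

```python
# Hypothetical sketch of the core service container's write path: it
# applies for a target storage space whose capacity is at least the
# size of the code to be written, then writes into that space.
# Names and the 4 KiB block granularity are illustrative assumptions.

def apply_for_storage(required_size, block_size=4096):
    """Round the requested size up to whole blocks so the granted
    capacity is always >= the size of the code to be written."""
    blocks = -(-required_size // block_size)  # ceiling division
    capacity = blocks * block_size
    assert capacity >= required_size
    return {"capacity": capacity, "data": None}

def write_code(code):
    """Apply for a target storage space sized for `code`, verify the
    capacity constraint, then perform the write."""
    space = apply_for_storage(len(code))
    if len(code) > space["capacity"]:
        raise ValueError("target storage space too small")
    space["data"] = code
    return space
```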
An optional implementation of displaying the code corresponding to the target functional component on the editing interface is as follows: invoking the target computing node to add language annotations to the code corresponding to the target functional component, and displaying the annotated code on the editing interface as the code corresponding to the target functional component. This makes it easier for the registered user to read and understand the code corresponding to the functional component.
It should be noted that the control logic of the service node may be implemented based on Function as a Service (FaaS). The service node adopts Function Compute (FC) as the supporting platform for managing the computing nodes and the storage node. Because the control logic of the service node is developed on the FaaS platform, only the control logic itself needs to be provided during development; the code implementing the control logic is deployed to the FaaS platform through the platform's fun tool, so subsequent work such as handling peaks and valleys of access traffic, scaling, and operation and maintenance can be omitted. In the embodiment of the present application, the control logic mainly includes: (1) scheduling control such as creation, destruction, and restarting of the ECI containers; (2) management and control of the storage node 15 through the memory-mapped container. Optionally, the control logic may load the storage node onto the service node through the memory-mapped container, perform memory mapping between the service node and the storage node 15, and unload the storage node from the service node 1 after the access to the storage node is finished.
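The load/access/unload pattern of the storage-node control logic pairs naturally with a context manager, so the unload always runs even if the access fails. The `mounted` helper and the dictionary-based node objects below are illustrative stand-ins; real control logic would call the cloud provider's mount and memory-mapping APIs.

```python
# Hypothetical sketch of the control logic's storage handling: load
# (mount) the storage node onto the service node, access it while
# mapped, and unload it when the access is finished. All names are
# illustrative, not from the patent.

from contextlib import contextmanager

@contextmanager
def mounted(storage_node, service_node):
    service_node["mounts"].append(storage_node)       # load storage node
    try:
        yield storage_node                            # access while mapped
    finally:
        service_node["mounts"].remove(storage_node)   # unload when done
```

Usage: `with mounted(store, svc) as s: ...` guarantees the storage node is unloaded from the service node after the access finishes, mirroring the load/map/unload sequence described above.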
Because the control logic of the service node is implemented on the FaaS platform in a serverless manner, the whole cloud application platform requires no operation and maintenance from its users. Meanwhile, the FaaS platform and the ECI scaling mechanism improve the usability of the cloud application platform and reduce the overall maintenance cost.
It should be noted that the execution subjects of the steps of the methods provided in the above embodiments may be the same device, or different devices may be used as the execution subjects of the methods. For example, the execution subjects of steps 201 and 202 may be device a; for another example, the execution subject of step 201 may be device a, and the execution subject of step 202 may be device B; and so on.
In addition, in some of the flows described in the above embodiments and the drawings, a plurality of operations are included in a specific order, but it should be clearly understood that the operations may be executed out of the order presented herein or in parallel, and the sequence numbers of the operations, such as 201, 202, etc., are merely used for distinguishing different operations, and the sequence numbers do not represent any execution order per se. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel.
Accordingly, embodiments of the present application also provide a computer-readable storage medium storing computer instructions, which, when executed by one or more processors, cause the one or more processors to perform the steps in the resource calling method.
Fig. 3 is a schematic structural diagram of a server device according to an embodiment of the present application. As shown in fig. 3, the server device includes: a memory 40a, a processor 40b, and a communication component 40c. The memory 40a stores a computer program and a mapping relationship between the protocol port of the server device and the protocol port of the computing node. The communication connection between the server device and the computing node is established by the computing node reverse mapping its protocol port to the protocol port of the server device.
In the present embodiment, the processor 40b is coupled to the memory 40a and the communication component 40c, and is configured to execute the computer program to: obtain an access request through the communication component 40c; and invoke the computing node to process the access request based on the mapping relationship between the protocol port of the server device and the protocol port of the computing node.
Optionally, the server device is located in a different network from the computing node.
Optionally, in the mapping relationship between the protocol port of the server device and the protocol port of the computing node, the protocol port of the server device is a registered port, and the protocol port of the computing node is a well-known port.
In some embodiments, the access request contains a user identification; the number of computing nodes is at least one. When the processor 40b invokes the computing node to process the access request, the processor is specifically configured to: determining a target protocol port which is registered in advance by a registered user and corresponds to the user identification according to the user identification; determining a target computing node from at least one computing node according to a mapping relation between a protocol port of the server device and a protocol port of the computing node and the identification of the target protocol port; and calling the target computing node to process the access request.
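The two-step resolution just described — user identifier to pre-registered target protocol port, then target protocol port to target computing node via the mapping relationship — can be sketched as two dictionary lookups. The table names and identifiers below are illustrative assumptions, not from the patent.

```python
# Hypothetical sketch of target-node resolution on the server device:
# step 1 maps the user identification to the target protocol port the
# registered user registered in advance; step 2 uses the mapping
# relationship (server-side port -> computing node) to pick the
# target computing node. All names are illustrative.

def resolve_target_node(user_id, registered_ports, port_mapping):
    """registered_ports: user id -> server-side protocol port
    port_mapping: server-side protocol port -> computing node id"""
    target_port = registered_ports[user_id]   # step 1: user -> target port
    return port_mapping[target_port]          # step 2: port -> target node
```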
Further, when the processor 40b invokes the target computing node to process the access request, the processor is specifically configured to: calling a target computing node to load a storage node on the server-side equipment; and calling the target computing node to access the storage node according to the access request.
In some embodiments, the access request is a write request, and the write request includes data to be written. When invoking the target computing node to access the storage node according to the access request, the processor 40b is specifically configured to: invoke the target computing node to apply for a target storage space in the storage node according to the size of the data to be written; and invoke the target computing node to write the data to be written into the target storage space, where the capacity of the target storage space is greater than or equal to the size of the data to be written.
In other embodiments, the access request is a read request, and the read request includes a target storage address of the data to be read in the storage node. When invoking the target computing node to access the storage node according to the access request, the processor 40b is specifically configured to: invoke the target computing node to read the data to be read from the storage space pointed to by the target storage address; and output the data to be read.
In yet other embodiments, the access request is a query request, and the query request contains a query condition. When invoking the target computing node to access the storage node according to the access request, the processor 40b is specifically configured to: invoke the target computing node to search the storage node for target data meeting the query condition; and output the target data.
In the embodiment of the application, the server device is used for providing the integrated development environment, so that a user can use the integrated development environment through a browser.
Optionally, the processor 40b is further configured to: provide an editing interface, the editing interface comprising at least one functional component; in response to a selection operation for the at least one functional component, invoke the target computing node to compile the code corresponding to the target functional component according to the identifier of the selected target functional component; and invoke the target computing node to write the code corresponding to the target functional component into the storage node.
Optionally, the processor 40b is further configured to: before the target computing node is called to write the codes corresponding to the target function components into the storage node, the codes corresponding to the target function components are displayed on the editing interface, so that the registered user can check and/or modify the codes corresponding to the function components.
Correspondingly, when the processor 40b calls the target computing node to write the code corresponding to the target functional component into the storage node, the processor is specifically configured to: responding to the storage operation, and taking a code corresponding to the target function component as a code to be written; and calling the target computing node to write the code corresponding to the target functional component into the storage node.
Optionally, when displaying the code corresponding to the target functional component on the editing interface, the processor 40b is specifically configured to: invoke the target computing node to add language annotations to the code corresponding to the target functional component; and display the annotated code on the editing interface as the code corresponding to the target functional component.
In the embodiment of the present application, the processor 40b is further configured to: receive, through the communication component 40c, a network connection request initiated by a computing node, where the network connection request comprises a first protocol port of the computing node and a second protocol port of the server device; establish a mapping relationship between the protocol port of the server device and the protocol port of the computing node according to the first protocol port of the computing node and the second protocol port of the server device; and establish a communication connection between the server device and the computing node based on the mapping relationship between the protocol port of the server device and the protocol port of the computing node.
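The bookkeeping step of establishing this mapping relationship from a node-initiated connection request can be sketched as below. The request field names are illustrative assumptions; a real reverse port mapping would additionally open a listener on the second (server-side) port and forward its traffic back over the node-initiated connection, much as SSH remote forwarding does — this sketch records only the mapping table entry.

```python
# Hypothetical sketch: the computing node initiates a network
# connection request carrying its own (first) protocol port and the
# server device's (second) protocol port; the server device records
# the mapping (server-side port -> computing node and node-side port).
# All names are illustrative, not from the patent.

def register_reverse_mapping(mapping_table, request):
    node_port = request["first_protocol_port"]      # computing node side
    server_port = request["second_protocol_port"]   # server device side
    mapping_table[server_port] = (request["node_id"], node_port)
    return mapping_table
```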
In some optional embodiments, as shown in fig. 3, the server device may further include: a power supply component 40d, and the like. Only some components are schematically shown in fig. 3, which does not mean that the server device must include all of the components shown in fig. 3, nor that the server device includes only the components shown in fig. 3.
In embodiments of the present application, the memory is used to store computer programs and may be configured to store other various data to support operations on the device on which it is located. Wherein the processor may execute a computer program stored in the memory to implement the corresponding control logic. The memory may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
In the embodiments of the present application, the processor may be any hardware processing device that can execute the above method logic. Optionally, the processor may be a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), or a Micro Controller Unit (MCU); it may also be a programmable device such as a Field-Programmable Gate Array (FPGA), a Programmable Array Logic (PAL) device, a Generic Array Logic (GAL) device, or a Complex Programmable Logic Device (CPLD); or an Advanced RISC Machine (ARM) processor, a System on Chip (SoC), or the like, but is not limited thereto.
In embodiments of the present application, the communication component is configured to facilitate wired or wireless communication between the device in which it is located and other devices. The device in which the communication component is located can access a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G, 5G, or a combination thereof. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component may also be implemented based on Near Field Communication (NFC) technology, Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, or other technologies.
In embodiments of the present application, a power supply component is configured to provide power to various components of the device in which it is located. The power components may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device in which the power component is located.
In this embodiment, the server device and the computing node establish a communication connection by the computing node reverse mapping its protocol port to a protocol port of the server device. Therefore, the server device can invoke the computing node as a computing resource of the cloud application based on the mapping relationship between the protocol port of the server device and the protocol port of the computing node; the computing node and the cloud application do not need to belong to the same cloud vendor, which decouples the cloud application from the computing node and improves the flexibility of the cloud application.
It should be noted that, the descriptions of "first", "second", etc. in this document are used for distinguishing different messages, devices, modules, etc., and do not represent a sequential order, nor limit the types of "first" and "second" to be different.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, a computer readable medium does not include a transitory computer readable medium such as a modulated data signal and a carrier wave.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (27)

1. A cloud application system, comprising: the system comprises a cloud application node, a proxy node and at least one computing node; the cloud application node is used for providing application services; the agent node and the at least one computing node establish communication connection by the at least one computing node reverse mapping its protocol port to the protocol port of the agent node;
the proxy node maintains the mapping relation between the protocol port of the proxy node and the protocol port of the computing node;
the proxy node is configured to invoke the at least one computing node as a computing resource of the cloud application node based on a mapping relationship between a protocol port of the proxy node and a protocol port of a computing node.
2. The system of claim 1, the cloud application node being located in a different network than the at least one computing node; the cloud application node and the agent node are located in the same network.
3. The system of claim 2, the cloud application node and the proxy node being located in a wide area network environment, the at least one computing node being located in a local area network environment.
4. The system of claim 1, the protocol port of the proxy node being a registered port and the protocol port of the at least one compute node being a well-known port.
5. The system of claim 1, the cloud application node to obtain an access request; the access request comprises a user identification; determining a target protocol port which is registered in advance by a registered user corresponding to the user identification according to the user identification; providing the access request to a target protocol port of the proxy node;
the proxy node is specifically configured to: determining a target computing node from the at least one computing node according to the mapping relation between the protocol port of the agent node and the protocol port of the computing node and the identification of the target protocol port; providing the access request to the target computing node for the target computing node to process the access request.
6. The system of claim 5, further comprising: a storage node; when the target computing node processes the access request, the target computing node is specifically configured to:
mounting the storage node on the proxy node;
and accessing the storage node according to the access request.
7. The system of claim 6, the storage node being a network attached storage node.
8. The system of claim 6, the cloud application node to provide an integrated development environment for use by a user through a browser.
9. The system of any of claims 1-8, the cloud application node is implemented based on a function-as-a-service platform.
10. The system of any of claims 1-8, the at least one compute node being a container instance.
11. The system of any of claims 1-8, the at least one computing node further to initiate a network connection request to the proxy node; the network connection request comprises a first protocol port of the at least one compute node and a second protocol port of the proxy node;
the proxy node is further configured to: establishing a mapping relation between the protocol port of the agent node and the protocol port of the computing node according to the first protocol port of the at least one computing node and the second protocol port of the agent node; and establishing a communication connection between the agent node and the at least one computing node based on the mapping relation between the protocol port of the agent node and the protocol port of the computing node.
12. A resource invocation method applied to a service node, wherein a communication connection between the service node and a computing node is established by the computing node by reverse mapping a protocol port of the computing node to a protocol port of the service node, and the method comprises the following steps:
acquiring an access request; and calling the computing node to process the access request based on the mapping relation between the protocol port of the service node and the protocol port of the computing node.
13. The method of claim 12, the serving node being located in a different network than the compute node.
14. The method of claim 12, wherein in the mapping relationship between the protocol port of the service node and the protocol port of the computing node, the protocol port of the service node is a registered port, and the protocol port of the computing node is a well-known port.
15. The method of claim 12, the access request including a user identification; the number of the computing nodes is at least one; the invoking the computing node to process the access request based on the mapping relationship between the protocol port of the service node and the protocol port of the computing node comprises:
determining a target protocol port which is registered in advance by a registered user corresponding to the user identification according to the user identification;
determining a target computing node from the at least one computing node according to the mapping relation between the protocol port of the service node and the protocol port of the computing node and the identification of the target protocol port;
and calling the target computing node to process the access request.
16. The method of claim 15, the invoking the target computing node to process the access request comprising:
calling the target computing node to mount a storage node on the service node;
and calling the target computing node to access the storage node according to the access request.
17. The method of claim 16, the access request being a write request; the write request comprises data to be written; the invoking the target computing node to access the storage node according to the access request includes:
calling the target computing node to apply for a target storage space in the storage node according to the size of the data to be written; and the number of the first and second groups,
calling the target computing node to write the data to be written into the target storage space; the capacity of the target storage space is larger than or equal to the size of the data to be written.
18. The method of claim 16, the access request being a read request; the read request comprises a target storage address of data to be read in the storage node; the invoking the target computing node to access the storage node according to the access request includes:
calling the target computing node to read the data to be read from the memory space pointed by the target memory address; and outputting the data to be read.
19. The method of claim 16, the access request being a query request; the query request contains a query condition; the invoking the target computing node to access the storage node according to the access request includes:
calling the target computing node to search the storage node for target data meeting the query condition; and outputting the target data.
20. The method of claim 16, the service node for providing an integrated development environment for use by a user through a browser.
21. The method of claim 20, further comprising:
providing an editing interface; the editing interface comprises at least one functional component;
responding to the selection operation aiming at the at least one functional component, calling the target computing node to compile codes corresponding to the target functional components according to the identification of the selected target functional component;
and calling the target computing node to write the code corresponding to the target functional component into the storage node.
22. The method of claim 21, prior to invoking the target compute node to write code corresponding to the target functional component to the storage node, further comprising:
and displaying the code corresponding to the target function component on the editing interface so that the registered user can view and/or modify the code corresponding to the function component.
23. The method of claim 22, the invoking the target compute node to write code corresponding to the target functional component to the storage node, comprising:
responding to a saving operation, and taking a code corresponding to the target function component as the code to be written;
and calling the target computing node to write the code corresponding to the target functional component into the storage node.
24. The method of claim 22, wherein the displaying the code corresponding to the target function component on the editing interface comprises:
calling the target computing node to perform language annotation on the code corresponding to the target functional component;
and taking the code with the language annotation as the code corresponding to the target function component, and displaying the code corresponding to the target function component on the editing interface.
25. The method according to any of claims 12-24, further comprising, before invoking the computing node to process the access request based on a mapping between a protocol port of the service node and a protocol port of a computing node:
receiving a network connection request initiated by the computing node; the network connection request comprises a first protocol port of the computing node and a second protocol port of the service node;
establishing a mapping relation between the protocol port of the service node and the protocol port of the computing node according to the first protocol port of the computing node and the second protocol port of the service node;
and establishing communication connection between the service node and the computing node based on the mapping relation between the protocol port of the service node and the protocol port of the computing node.
26. A server device, comprising: a memory, a processor, and a communication component, wherein the memory is configured to store a computer program;
the processor is coupled to the memory and the communication component, and is configured to execute the computer program to perform the steps of the method of any one of claims 12-25; and
the communication connection between the server device and the computing node is established by the computing node mapping its protocol port back to the protocol port of the server device.
27. A computer-readable storage medium storing computer instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the method of any one of claims 12-25.
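The reverse connection direction recited in claim 26 — the computing node mapping its protocol port back to the server device, rather than the server dialing in — can be illustrated with a small socket sketch. All names and the message payload are illustrative assumptions, not the patent's implementation; the point is only that the connection is initiated outward from the computing node, so the server device needs no inbound route to it.

```python
import queue
import socket
import threading


def server_device(port_q: queue.Queue, result: list) -> None:
    # The server device listens on its own protocol port and waits for
    # the computing node to dial out to it.
    with socket.socket() as srv:
        srv.bind(("127.0.0.1", 0))        # ephemeral port stands in for the server's protocol port
        srv.listen(1)
        port_q.put(srv.getsockname()[1])  # publish the port the computing node should dial
        conn, _ = srv.accept()
        with conn:
            result.append(conn.recv(1024).decode())


def computing_node(server_port: int) -> None:
    # The computing node "maps its protocol port back" by initiating
    # the connection in the reverse direction, toward the server device.
    with socket.socket() as cli:
        cli.connect(("127.0.0.1", server_port))
        cli.sendall(b"compute-node ready")


port_q: queue.Queue = queue.Queue()
received: list = []
t = threading.Thread(target=server_device, args=(port_q, received))
t.start()
computing_node(port_q.get())  # blocks until the server device is listening
t.join()
print(received[0])
```

This is the same idea as SSH remote port forwarding: the node behind the firewall opens the channel, and the server then reuses that channel for its own requests.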
CN202010526842.6A 2020-06-09 2020-06-09 Resource calling method, device, system and storage medium Active CN113301080B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010526842.6A CN113301080B (en) 2020-06-09 2020-06-09 Resource calling method, device, system and storage medium


Publications (2)

Publication Number Publication Date
CN113301080A true CN113301080A (en) 2021-08-24
CN113301080B CN113301080B (en) 2022-08-02

Family

ID=77318658

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010526842.6A Active CN113301080B (en) 2020-06-09 2020-06-09 Resource calling method, device, system and storage medium

Country Status (1)

Country Link
CN (1) CN113301080B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114024743A (en) * 2021-11-04 2022-02-08 山东中创软件商用中间件股份有限公司 Remote management method, device, equipment and storage medium for application server
CN114584606A (en) * 2022-04-29 2022-06-03 阿里云计算有限公司 End cloud communication method and equipment
CN114826864A (en) * 2022-03-11 2022-07-29 阿里巴巴(中国)有限公司 Architecture determination method and apparatus for application system, electronic device, and computer-readable storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102291467A (en) * 2011-09-15 2011-12-21 电子科技大学 Communication platform and method suitable for private cloud environment
US20140185621A1 (en) * 2013-01-03 2014-07-03 International Business Machines Corporation Energy management for communication network elements
CN104753930A (en) * 2015-03-17 2015-07-01 成都盛思睿信息技术有限公司 Cloud desktop management system based on security gateway and security access control method thereof
US20160143076A1 (en) * 2013-07-15 2016-05-19 Alcatel Lucent Proxy node and method
US20170118249A1 (en) * 2015-10-23 2017-04-27 Oracle International Corporation Managing security agents in a distributed environment
CN107426034A (en) * 2017-08-18 2017-12-01 国网山东省电力公司信息通信公司 Large-scale container scheduling system and method based on a cloud platform
CN108139936A (en) * 2015-08-06 2018-06-08 瑞典爱立信有限公司 Method, device and system for providing access to a serial port of a virtual machine in a deployed virtual application
CN108370379A (en) * 2015-12-14 2018-08-03 亚马逊技术有限公司 Device management with tunneling
CN109314724A (en) * 2016-08-09 2019-02-05 华为技术有限公司 Method, device and system for a virtual machine to access a physical server in a cloud computing system
CN111107126A (en) * 2018-10-26 2020-05-05 慧与发展有限责任合伙企业 Replication of encrypted volumes



Also Published As

Publication number Publication date
CN113301080B (en) 2022-08-02

Similar Documents

Publication Publication Date Title
CN109739604B (en) Page rendering method, device, server and storage medium
US20190102201A1 (en) Component invoking method and apparatus, and component data processing method and apparatus
CN113301080B (en) Resource calling method, device, system and storage medium
CN101853152B (en) Method and system for generating graphical user interface
US11546431B2 (en) Efficient and extensive function groups with multi-instance function support for cloud based processing
CN112565317B (en) Hybrid cloud system, data processing method and device thereof, and storage medium
US11582285B2 (en) Asynchronous workflow and task api for cloud based processing
US10268779B2 (en) Sharing server conversational context between multiple cognitive engines
CN113448690B (en) Monitoring method and device
CN114996134A (en) Containerized deployment method, electronic equipment and storage medium
US9164817B2 (en) Mobile communication terminal to provide widget expansion function using message communication, and operation method of the mobile communication terminal
CN109343970B (en) Application program-based operation method and device, electronic equipment and computer medium
US20230164210A1 (en) Asynchronous workflow and task api for cloud based processing
CN112650662A (en) Test environment deployment method and device
US9916280B2 (en) Virtualizing TCP/IP services with shared memory transport
CN108353017B (en) Computing system and method for operating multiple gateways on a multi-gateway virtual machine
CN114365467B (en) Methods, apparatuses, and computer readable media for determining 3GPP FLUS reception capability
CN107566519A Code running method, apparatus, server and server cluster
CN113434244B (en) Instance creating method, instance creating apparatus, data processing method, data processing system, and storage medium
CN114189457A (en) Cloud resource display and processing method, equipment and storage medium
CN110990146A (en) Load balancing method, device, system and storage medium
CN116056240B (en) Resource allocation system, method and equipment
CN117724726B (en) Data processing method and related device
CN115514611A (en) Message processing method, device, equipment and storage medium
CN118101744A (en) Method and device for operating resource object and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant