CN114205414B - Data processing method, device, electronic equipment and medium based on service grid - Google Patents

Data processing method, device, electronic equipment and medium based on service grid

Info

Publication number
CN114205414B
Authority
CN
China
Prior art keywords
service
service application
node
access request
data access
Prior art date
Legal status
Active
Application number
CN202111511550.6A
Other languages
Chinese (zh)
Other versions
CN114205414A (en)
Inventor
刘正峰
杨振宇
罗晓鸣
潘丽娜
郑智斌
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202111511550.6A
Publication of CN114205414A
Application granted
Publication of CN114205414B
Legal status: Active


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5072Grid computing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer And Data Communications (AREA)

Abstract

The present disclosure provides a data processing method, apparatus, electronic device, and medium based on a service grid, and relates to the technical field of computer cloud services, in particular to service grid technology. The implementation scheme is as follows: acquiring a data access request of a first service application, wherein the data access request is directed to a first node in a second service application comprising a plurality of nodes; obtaining address information of the first node from a multi-fragment service discovery proxy server for the second service application; and forwarding the data access request of the first service application to the first node of the second service application based on the address information.

Description

Data processing method, device, electronic equipment and medium based on service grid
Technical Field
The present disclosure relates to the field of computer cloud service technologies, in particular to service grid technology, and more particularly to a data processing method, apparatus, electronic device, computer-readable storage medium, and computer program product based on a service grid.
Background
With the widespread acceptance and use of the micro-service architecture, online services are decomposed into micro-services to an ever greater degree, and the number of online service modules has grown explosively.
The approaches described in this section are not necessarily approaches that have been previously conceived or pursued. Unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, the problems mentioned in this section should not be considered as having been recognized in any prior art unless otherwise indicated.
Disclosure of Invention
The present disclosure provides a data processing method, apparatus, electronic device, computer readable storage medium and computer program product based on a service grid.
According to an aspect of the present disclosure, there is provided a service grid-based data processing method, including: acquiring a data access request of a first service application, wherein the data access request is directed to a first node in a second service application comprising a plurality of nodes; obtaining address information of the first node from a multi-fragment service discovery proxy server for the second service application; forwarding a data access request of the first service application to the first node of the second service application based on the address information.
According to another aspect of the present disclosure, there is provided a service grid-based data processing apparatus, comprising: an access request acquisition unit configured to acquire a data access request of a first service application, wherein the data access request is for a first node in a second service application including a plurality of nodes; a node information acquisition unit configured to acquire address information of the first node from a multi-fragment service discovery proxy server for the second service application; and a forwarding unit configured to forward a data access request of the first service application to the first node of the second service application based on the address information.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method as described above.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method as described above.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the method as described above.
According to one or more embodiments of the present disclosure, the service grid technology can be conveniently applied to a multi-sliced service application, so that a unified communication service capability can be provided for an online service system including the multi-sliced service application.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The accompanying drawings illustrate exemplary embodiments and, together with the description, serve to explain exemplary implementations of the embodiments. The illustrated embodiments are for exemplary purposes only and do not limit the scope of the claims. Throughout the drawings, identical reference numerals designate similar, but not necessarily identical, elements.
FIG. 1 schematically illustrates a schematic block diagram of a network system based on a service grid architecture according to one embodiment of the present application;
FIG. 2 illustrates an exemplary flow chart of a data processing method according to an embodiment of the present disclosure;
FIG. 3 illustrates an example of a service grid based system architecture according to the present disclosure;
FIG. 4 illustrates an exemplary process of data communication according to an embodiment of the present disclosure;
FIG. 5 illustrates an exemplary block diagram of a data processing apparatus according to an embodiment of the present disclosure; and
Fig. 6 illustrates a block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the present disclosure, the use of the terms "first," "second," and the like to describe various elements is not intended to limit the positional relationship, timing relationship, or importance relationship of the elements, unless otherwise indicated, and such terms are merely used to distinguish one element from another. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, they may also refer to different instances based on the description of the context.
The terminology used in the description of the various illustrated examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, the elements may be one or more if the number of the elements is not specifically limited. Furthermore, the term "and/or" as used in this disclosure encompasses any and all possible combinations of the listed items.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 1 schematically shows a schematic block diagram of a network system based on a service grid architecture according to one embodiment of the application.
As shown in fig. 1, the system 100 may include micro service nodes 101 and 102. Although only two are shown in fig. 1, it is understood that more micro service nodes may be included in the system 100. Each micro service node may be used to implement an independent piece of business logic in a complex service system. Different micro service nodes may be deployed on the same computing node or on different computing nodes. A complex network system may include thousands of micro service nodes, and at this scale the connection relationships between the individual micro service nodes are complex. Because the nodes differ in programming language and communication framework, data communication between different micro service nodes is difficult. In order to improve the overall management efficiency of the micro service network system, the related art improves the service efficiency of the network system by introducing a service grid (Service Mesh).
As shown in fig. 1, the service grid may be used to configure a proxy node 103 associated with micro service node 101 and a proxy node 104 associated with micro service node 102. In a service grid architecture, such proxy nodes are also referred to as sidecars (Sidecar). Traffic sent by the micro service node 101 to the micro service node 102 can be intercepted by the proxy node deployed alongside the micro service node, and the service grid forwards the traffic of the micro service node 101 to the micro service node 102, thereby realizing data communication between the micro service node 101 and the micro service node 102.
Further, as shown in fig. 1, the system 100 may further include a service grid control center 105. The service grid control center 105 may send control instructions based on the link information of each micro service node in the network, so as to implement various advanced network architecture capabilities for data communication in the service grid, such as dynamic circuit breaking (fusing), dynamic timeout, architecture degradation, loss-stopping operations, and so on.
The service grid employed in the related art currently fails to provide a solution for multi-fragment services. Multi-fragmentation (sharding) is a common distributed architecture, mainly applied to the following scenarios: (1) when a service depends on a data set so large that a single instance cannot serve it, the data set can be split into multiple fragments, giving the service horizontal scaling capability; (2) when a service has a plurality of smaller data sets, each independently serving a specific business, each data set can be treated as one fragment so that the fragments jointly share the resources of one application module. In these scenarios the service grid cannot dynamically obtain the fragment information inside the service and therefore cannot handle the link relationships of the service. However, multi-fragment services are an indispensable part of a complex system, and the inability to handle them means that the service grid cannot achieve full service coverage. Further, because the service grid cannot cover multi-fragment services, service modules that depend on a multi-fragment service still need to explicitly write inter-framework communication code, so the service communication standard cannot be unified and the development workload increases. In addition, a multi-fragment service cannot reuse the advanced architecture capabilities provided by the service grid, such as dynamic circuit breaking (fusing) and dynamic timeout; architecture degradation and loss-stopping operations must be planned separately for the multi-fragment service, so operation and maintenance capabilities cannot be generalized or standardized. Moreover, because the service grid cannot cover multi-fragment services, blind spots appear in service observability: the traffic of a multi-fragment service becomes a black hole to the service grid, so that service information is missing from the whole-module call chain and the traffic view.
In order to solve the above-mentioned problems, the present disclosure provides a new data processing method based on the service grid architecture.
Fig. 2 shows an exemplary flowchart of a data processing method according to an embodiment of the present disclosure. The method shown in fig. 2 may be implemented using a proxy node of the service grid shown in fig. 1.
In step S202, a data access request of a first service application may be obtained, wherein the data access request is for a first node in a second service application comprising a plurality of nodes.
In step S204, address information of the first node may be acquired from a multi-fragment service discovery proxy server for the second service application.
In step S206, the data access request of the first service application may be forwarded to the first node of the second service application based on the address information.
By using the method provided by the embodiments of the present disclosure, the service grid architecture can be applied to multi-fragment services, so that multi-fragment service nodes are brought into the coverage of the service grid and the service grid architecture can cover more of the services in the network communication system.
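Purely for illustration, the sketch below shows one way steps S202 to S206 might run on the service grid data plane, including relaying the first node's response back to the first service application. All names in the sketch (DataAccessRequest, DataPlane, the lookup method of the shard discovery proxy, and so on) are assumptions introduced for this example and are not defined by this disclosure.
```python
# Illustrative sketch only: steps S202-S206 on the service grid data plane.
# DataAccessRequest, DataPlane and the lookup() contract are assumptions.
from dataclasses import dataclass


@dataclass
class DataAccessRequest:
    source_app: str      # the first service application
    target_app: str      # the second (multi-fragment) service application
    target_node: str     # e.g. "shard-1", the first node
    payload: bytes


class DataPlane:
    def __init__(self, shard_discovery_proxy):
        # shard_discovery_proxy: any object exposing
        # lookup(app, node) -> (ip, port)
        self.shard_discovery_proxy = shard_discovery_proxy

    def handle(self, request: DataAccessRequest) -> bytes:
        # S202: the data access request of the first service application
        # has already been obtained (e.g. received at the allocated
        # service address described below).
        # S204: obtain the first node's address information from the
        # multi-fragment service discovery proxy server.
        ip, port = self.shard_discovery_proxy.lookup(
            request.target_app, request.target_node)
        # S206: forward the request to the first node and relay the
        # node's response back to the first service application.
        return self._forward(ip, port, request.payload)

    def _forward(self, ip: str, port: int, payload: bytes) -> bytes:
        # A real data plane would open a TCP/HTTP connection here;
        # the stubbed response stands in for the first node's reply.
        return b"response from %s:%d" % (ip.encode(), port)
```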
The principles of the present disclosure will be described below in connection with specific examples.
In step S202, a data access request of a first service application may be obtained, wherein the data access request is for a first node in a second service application comprising a plurality of nodes.
The second service application may be a multi-fragmented service application. Thus, the second service application may comprise a plurality of different nodes, where each node may store and process a different data set. Different nodes in the second service application may be deployed on the same host or on different hosts. Further, depending on the operational state of the second service application, the state of the different nodes may change, so that the availability and address information of the nodes may change.
When a first service application wishes to initiate data access to a second service application, the first service application may first send a service discovery request for the second service application. Under the architecture of the service grid, the service discovery request of the first service application may be intercepted by the data plane of the service grid. For example, the data plane of the service grid may intercept the service discovery request of the first service application through a traffic hijacking module. The service discovery request sent by the first service application may indicate the destination of the data access request, such as the first node of the second service application. In response to intercepting the service discovery request, the data plane of the service grid may allocate a service address for the first service application's data access requests directed to the first node and send the allocated service address to the first service application. In this way, data access requests sent by the first service application and destined for the first node of the second service application will all be sent to that service address. In subsequent data communication, the data plane of the service grid will receive, at the allocated service address, the data access requests sent by the first service application for the first node. Thus, for the first service application, the service address may be regarded as a virtual address of the first node (as distinguished from the real address of the first node). In this way, the service grid is invisible to the first service application, which operates as if it were in data communication directly with the second service application.
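As a concrete illustration of the interception and virtual-address allocation described above, the sketch below allocates a loopback address with an incrementing port per target node; this allocation scheme, and the function name on_service_discovery, are assumptions made only for the example.
```python
# Illustrative sketch: intercepting a service discovery request and
# handing back a virtual service address for the first node.
import itertools

_next_port = itertools.count(20000)
_virtual_addresses: dict[tuple[str, str], tuple[str, int]] = {}


def on_service_discovery(source_app: str, target_app: str, target_node: str):
    """Called when the traffic hijacking module intercepts a service
    discovery request sent by the first service application."""
    key = (target_app, target_node)
    if key not in _virtual_addresses:
        # Allocate a service address that stands in for the first node.
        # Later data access requests sent to it arrive at the data plane,
        # keeping the service grid invisible to the first service application.
        _virtual_addresses[key] = ("127.0.0.1", next(_next_port))
    # The allocated service address is returned to the first service application.
    return _virtual_addresses[key]
```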
In step S204, address information of the first node may be acquired from a multi-fragment service discovery proxy server for the second service application.
The multi-fragment service discovery proxy server may have stored therein address information for a plurality of nodes in the second service application. For example, address information of some or all of the nodes including the first node in the second service application may be stored in the multi-fragment service discovery proxy server. In some examples, the address information may include an Internet Protocol (IP) address and port information of the node, such as an IP address and port information of the first node. It will be appreciated that when there are a plurality of multi-slice services in the service network, the address information of the nodes of each multi-slice service may be stored in the multi-slice service discovery proxy server, respectively.
It will be appreciated that for distributed multi-slice services, situations may arise in which a node is temporarily unavailable or fails. Thus, the address information of each node may change over time. In the related art, it is this instability that causes the service grid to fail to dynamically acquire the fragment information in the multi-fragment service, and thus fail to process the link relationship between the multi-fragment service and other services in the service network. In order to solve the above-described problems, the present disclosure provides a multi-fragment service discovery proxy server. By storing the fragment information of the multi-fragment service (for example, address information of different nodes in the multi-fragment service) in the multi-fragment service discovery proxy server, the data plane of the service grid can quickly and accurately acquire the fragment information of the multi-fragment service, so that the data communication between different services in the service system can be realized by utilizing the architecture of the service grid.
In some embodiments, address information of a plurality of nodes of the second service application stored in the multi-fragment service discovery proxy server is updated at a predetermined time frequency. For example, address information stored in the multi-fragment service discovery proxy server may be updated at predetermined time intervals (e.g., one day, one week). If the fragmentation information of the multi-fragment service changes during this period, the changed information may be updated in the multi-fragment service discovery proxy server. In other embodiments, address information of a plurality of nodes of the second service application stored in the multi-fragment service discovery proxy server is updated in response to a change in address information of at least one of the plurality of nodes.
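One possible form of the multi-fragment service discovery proxy server described above is sketched below; it supports both an update at a predetermined time frequency and an event-driven update when a node's address information changes. The polling thread and the fetch_shards callback are implementation assumptions for this example, not requirements of the disclosure.
```python
# Illustrative sketch of a multi-fragment service discovery proxy server
# holding the IP address and port of each node of a multi-fragment service.
import threading
import time


class ShardDiscoveryProxy:
    def __init__(self, fetch_shards, refresh_interval_s: float = 86400.0):
        # fetch_shards(app) -> {node_name: (ip, port)} is assumed to be
        # supplied by the multi-fragment service application's own registry.
        self._fetch_shards = fetch_shards
        self._interval = refresh_interval_s          # e.g. one day
        self._table: dict[str, dict[str, tuple[str, int]]] = {}
        self._lock = threading.Lock()

    def lookup(self, app: str, node: str) -> tuple[str, int]:
        with self._lock:
            return self._table[app][node]

    def refresh(self, app: str) -> None:
        shards = self._fetch_shards(app)
        with self._lock:
            self._table[app] = shards

    def start_periodic_refresh(self, app: str) -> None:
        # Update at a predetermined time frequency.
        def loop():
            while True:
                self.refresh(app)
                time.sleep(self._interval)
        threading.Thread(target=loop, daemon=True).start()

    def on_node_changed(self, app: str) -> None:
        # Event-driven variant: update in response to a change in the
        # address information of at least one node.
        self.refresh(app)
```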
In step S206, the data access request of the first service application may be forwarded to the first node of the second service application based on the address information.
Based on the address information obtained from the multi-fragment service discovery proxy server, the data plane of the service grid is capable of forwarding the data access request of the first service application to the address of the first node, thereby enabling data communication between the first service application and the second service application.
In some embodiments, the method 200 may further comprise: a response to the data access request is obtained from the first node and forwarded to the first service application.
It will be appreciated that the data access request sent by the first service application may be of various types, such as reading or writing data. The data access request may contain specific data processing information. In response to the data access request of the first service application, the first node of the second service application may generate a corresponding response as the data access result of the data access request. In some embodiments, in response to receiving the data access request sent by the data plane of the service grid, the first node may generate a corresponding response and send it back to the data plane of the service grid, and the data plane of the service grid then forwards the received response to the first service application.
Further, as shown in FIG. 1, a control center may be included in the architecture of the service grid. The control center may send instructions to the sidecar components in the service grid to deliver various policies for scheduling data communications. In the case where the service grid architecture is applied to the multi-slice service using the method shown in fig. 2, a data communication policy related to the multi-slice service may be issued by the control center of the service grid.
In some embodiments, the method 200 may further comprise: scheduling instructions are received from a service grid control center. Wherein the scheduling instructions are based on link information obtained from the first service application and the second service application. The scheduling instructions are for scheduling data communications between the first service application and the second service application.
The link information may include information such as the number of nodes of the first service application and/or the second service application, network timeout information, and the like. It will be appreciated that examples of link information are not limited thereto, and the service grid control center may obtain various information regarding the data transmission status of the service applications according to actual circumstances to assist the control center in generating various policies for scheduling data communications. The scheduling instructions may include at least one of dynamic circuit-breaking (fusing) instructions, dynamic timeout instructions, architecture degradation instructions, and loss-stopping operation instructions, so as to improve the efficiency and stability of data communications in the network. It is understood that the control center may also generate other scheduling policies according to the actual situation.
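By way of illustration only, the sketch below shows how scheduling instructions carrying a dynamic timeout, a dynamic circuit-breaking (fusing) threshold and an architecture degradation flag might be applied on the data plane; the field names and the simple failure counter are assumptions for this example, not the exact scheme of this disclosure.
```python
# Illustrative sketch: applying scheduling instructions from the service
# grid control center on the data plane. Field names are assumptions.
from dataclasses import dataclass


@dataclass
class SchedulingInstruction:
    target_app: str
    timeout_s: float = 1.0     # dynamic timeout
    max_failures: int = 5      # dynamic circuit breaking (fusing) threshold
    degraded: bool = False     # architecture degradation / loss stopping


class PolicyEnforcer:
    def __init__(self):
        self._policies: dict[str, SchedulingInstruction] = {}
        self._failures: dict[str, int] = {}

    def apply(self, instruction: SchedulingInstruction) -> None:
        # Instructions come from the control center, which derives them
        # from link information (node counts, network timeouts, ...).
        self._policies[instruction.target_app] = instruction

    def allow(self, target_app: str) -> bool:
        policy = self._policies.get(target_app)
        if policy is None:
            return True
        if policy.degraded:
            return False          # traffic suppressed to stop losses
        return self._failures.get(target_app, 0) < policy.max_failures

    def record_failure(self, target_app: str) -> None:
        self._failures[target_app] = self._failures.get(target_app, 0) + 1
```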
Fig. 3 illustrates an example of a service grid based system architecture according to the present disclosure.
As shown in fig. 3, the system 300 may include a service grid data plane 310, a first service application 320, a multi-fragment service discovery proxy server 330, and a second service application 340. The second service application 340 is a multi-slice service application and may comprise a first node 341 and a second node 342.
In the system 300 shown in fig. 3, the service grid data plane may be configured to perform the data processing method shown in fig. 2. The data plane 310 may obtain, from the first service application 320, a data access request for the first node 341 in the second service application. Based on the address information of the first node obtained from the multi-slice service discovery proxy server 330, the data plane 310 may forward the data access request to the first node 341, and may further forward the response generated by the first node 341 for the data access request back to the first service application 320, thereby enabling data communication between the first service application and the second service application using the service grid architecture.
Further, the system 300 shown in fig. 3 may also include a service grid control center 350. The control center 350 may collect link information of the first service application and the second service application and generate, based on the link information, scheduling instructions for data communication between the first service application and the second service application.
Fig. 4 illustrates an exemplary process of data communication according to an embodiment of the present disclosure.
As shown in fig. 4, in step 401, a first service application 420 receives a data access request from a user.
In step 402, a first service application 420 generates and initiates a service discovery request for a first node of a second service application.
In step 403, the service grid data plane 430 intercepts the service discovery request, assigns a corresponding service address to the first service application, and sends the service address to the first service application 420. In the following data communication, data access requests of the first service application to the first node of the second service application will be sent to the assigned service address.
In step 404, the service grid data plane 430 receives a data access request sent by the first service application 420.
In step 405, the service grid data plane 430 accesses address information stored in the multi-fragment service discovery proxy server 440.
In step 406, the multi-fragment service discovery proxy 440 transmits address information of the first node to the data plane 430.
In step 407, the data plane 430 transmits the data access request received from the first service application 420 to the first node 450 based on the address information acquired in step 406.
In step 408, the first node 450 sends a response to the data access request to the data plane 430.
In step 409, the data plane 430 forwards the response to the first service application 420.
In step 410, the first service application 420 may provide a data response to the user.
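For illustration, the hypothetical helpers sketched earlier (on_service_discovery, ShardDiscoveryProxy, DataPlane and DataAccessRequest) can be wired together to trace steps 401 to 410. The snippet assumes those sketches are collected in a single module; it is example wiring only, not the disclosed implementation.
```python
# Example wiring of the earlier sketches along the steps of fig. 4.

# Steps 401-403: the discovery request is intercepted and a virtual
# service address for the first node is returned to the first service application.
service_addr = on_service_discovery("first-app", "second-app", "shard-1")

# Step 404: the data plane receives data access requests sent to that address.
proxy = ShardDiscoveryProxy(fetch_shards=lambda app: {"shard-1": ("10.0.0.7", 8080)})
proxy.refresh("second-app")
data_plane = DataPlane(proxy)

# Steps 405-409: the data plane looks up the first node's address, forwards
# the request and relays the node's response back to the first service application.
request = DataAccessRequest("first-app", "second-app", "shard-1", b"read key=42")
response = data_plane.handle(request)

# Step 410: the first service application answers the user with the response.
print(service_addr, response)
```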
With the exemplary data communication process shown in fig. 4, the service grid architecture can be conveniently extended to cover the data communication of multi-slice services, so that a unified communication standard can be provided for other service modules accessing the multi-slice services, development difficulty is reduced, and generalized, standardized architecture capabilities and operation and maintenance capabilities can be provided for the multi-slice services.
Fig. 5 shows an exemplary block diagram of a data processing apparatus according to an embodiment of the present disclosure.
As shown in fig. 5, the data processing apparatus 500 may include an access request acquisition unit 510, a node information acquisition unit 520, and a forwarding unit 530.
The access request acquisition unit 510 may be configured to acquire a data access request of a first service application, wherein the data access request is for a first node in a second service application comprising a plurality of nodes. The node information acquiring unit 520 may be configured to acquire address information of the first node from a multi-fragment service discovery proxy server for the second service application. The forwarding unit 530 may be configured to forward the data access request of the first service application to the first node of the second service application based on the address information.
In some embodiments, the access request acquisition unit may be configured to: intercepting a service discovery request of a first service application, wherein the service discovery request indicates a first node for a data access request; in response to the service discovery request, assigning a service address to a data access request of the first service application for the first node; transmitting the service address to the first service application; and receiving a data access request from the service address.
In some embodiments, the multi-fragment service discovery proxy server stores address information for a plurality of nodes in the second service application. The address information includes the internet protocol address and port of the node.
In some embodiments, address information of a plurality of nodes of the second service application stored in the multi-fragment service discovery proxy server is updated at a predetermined time frequency or in response to a change in address information of at least one of the plurality of nodes.
In some embodiments, the data processing apparatus 500 may further comprise a response unit, which may be configured to obtain a response to the data access request from the first node and forward the response to the first service application.
In some embodiments, the data processing apparatus 500 may further include a scheduling instruction receiving unit, which may be configured to receive scheduling instructions from the service grid control center, wherein the scheduling instructions are based on link information acquired from the first service application and the second service application. Wherein the scheduling instructions are for scheduling data communications between the first service application and the second service application.
In some embodiments, the link information includes the number of nodes of the service application, network timeout information.
In some embodiments, the scheduling instructions include at least one of a dynamic fuse instruction, a dynamic timeout instruction, an architecture degradation instruction, a loss-stopping operation instruction.
Steps S202 to S206 shown in fig. 2 may be performed by using units 510 to 530 shown in fig. 5, and will not be described again.
There is also provided, in accordance with one or more embodiments of the present disclosure, an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method as described above.
There is also provided, in accordance with one or more embodiments of the present disclosure, a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method as described above.
According to one or more embodiments of the present disclosure, there is also provided a computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the method as described above.
Referring to fig. 6, a block diagram of an electronic device 600 that may be a server or a client of the present disclosure, which is an example of a hardware device that may be applied to aspects of the present disclosure, will now be described. Electronic devices are intended to represent various forms of digital electronic computer devices, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the electronic device 600 includes a computing unit 601 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 602 or a computer program loaded from a storage unit 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the electronic device 600 can also be stored. The computing unit 601, ROM 602, and RAM 603 are connected to each other by a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Various components in the device 600 are connected to the I/O interface 605, including: an input unit 606, an output unit 607, a storage unit 608, and a communication unit 609. The input unit 606 may be any type of device capable of inputting information to the device 600; it may receive input numeric or character information and generate key signal inputs related to user settings and/or function control of the electronic device, and may include, but is not limited to, a mouse, a keyboard, a touch screen, a trackpad, a trackball, a joystick, a microphone, and/or a remote control. The output unit 607 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, video/audio output terminals, vibrators, and/or printers. The storage unit 608 may include, but is not limited to, magnetic disks and optical disks. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network, such as the Internet, and/or various telecommunications networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers and/or chipsets, such as Bluetooth™ devices, 802.11 devices, WiFi devices, WiMax devices, cellular communication devices, and/or the like.
The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 601 performs the various methods and processes described above, such as method 200. For example, in some embodiments, the method 200 may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. One or more of the steps of the method 200 described above may be performed when a computer program is loaded into RAM 603 and executed by the computing unit 601. Alternatively, in other embodiments, computing unit 601 may be configured to perform method 200 by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs, which may be executed and/or interpreted on a programmable system including at least one programmable processor; the programmable processor may be a special purpose or general-purpose programmable processor that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel, sequentially or in a different order, provided that the desired results of the disclosed aspects are achieved, and are not limited herein.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the foregoing methods, systems, and apparatus are merely exemplary embodiments or examples, and that the scope of the present disclosure is not limited by these embodiments or examples but only by the granted claims and their equivalents. Various elements of the embodiments or examples may be omitted or replaced with equivalent elements thereof. Furthermore, the steps may be performed in a different order than described in the present disclosure. Further, various elements of the embodiments or examples may be combined in various ways. Importantly, as technology evolves, many of the elements described herein may be replaced by equivalent elements that appear after this disclosure.

Claims (17)

1. A data processing method based on a service grid, comprising:
acquiring a data access request of a first service application, wherein the data access request is directed to a first node in a second service application comprising a plurality of nodes;
obtaining address information of the first node from a multi-fragment service discovery proxy server for the second service application;
forwarding a data access request of the first service application to the first node of the second service application based on the address information,
wherein the multi-slice service discovery proxy server stores address information of a plurality of nodes in the second service application, wherein the address information of the plurality of nodes of the second service application stored in the multi-slice service discovery proxy server is updated at a predetermined time frequency or in response to a change in address information of at least one of the plurality of nodes.
2. The data processing method of claim 1, wherein obtaining the data access request of the first service application comprises:
intercepting a service discovery request of the first service application, wherein the service discovery request indicates the first node for the data access request;
assigning a service address to the first service application for a data access request of the first node in response to the service discovery request;
transmitting the service address to the first service application;
the data access request is received from the service address.
3. The data processing method of claim 1, wherein the address information includes an internet protocol address and a port of a node.
4. The data processing method of claim 1, further comprising:
obtaining a response to the data access request from the first node;
the response is forwarded to the first service application.
5. The data processing method of any one of claims 1-4, further comprising:
receiving scheduling instructions from a service grid control center, wherein the scheduling instructions are based on link information obtained from the first service application and the second service application;
wherein the scheduling instructions are for scheduling data communications between the first service application and the second service application.
6. The data processing method of claim 5, wherein the link information includes a number of nodes of a service application, network timeout information.
7. The data processing method of claim 5, wherein the scheduling instruction comprises at least one of a dynamic fuse instruction, a dynamic timeout instruction, an architecture degradation instruction, a loss-stopping operation instruction.
8. A service grid based data processing apparatus comprising:
an access request acquisition unit configured to acquire a data access request of a first service application, wherein the data access request is for a first node in a second service application including a plurality of nodes;
a node information acquisition unit configured to acquire address information of the first node from a multi-fragment service discovery proxy server for the second service application;
a forwarding unit configured to forward a data access request of the first service application to the first node of the second service application based on the address information,
wherein the multi-slice service discovery proxy server stores address information of a plurality of nodes in the second service application, wherein the address information of the plurality of nodes of the second service application stored in the multi-slice service discovery proxy server is updated at a predetermined time frequency or in response to a change in address information of at least one of the plurality of nodes.
9. The data processing apparatus according to claim 8, wherein the access request acquisition unit is configured to:
intercepting a service discovery request of the first service application, wherein the service discovery request indicates the first node for the data access request;
assigning a service address to the first service application for a data access request of the first node in response to the service discovery request;
transmitting the service address to the first service application;
the data access request is received from the service address.
10. The data processing apparatus of claim 8, wherein the address information comprises an internet protocol address and a port of a node.
11. The data processing apparatus of claim 8, further comprising:
a response unit configured to:
obtaining a response to the data access request from the first node;
the response is forwarded to the first service application.
12. The data processing apparatus according to any one of claims 8-11, further comprising:
a scheduling instruction receiving unit configured to receive a scheduling instruction from a service grid control center, wherein the scheduling instruction is based on link information acquired from the first service application and the second service application;
wherein the scheduling instructions are for scheduling data communications between the first service application and the second service application.
13. The data processing apparatus of claim 12, wherein the link information includes a number of nodes of a service application, network timeout information.
14. The data processing apparatus of claim 12, wherein the scheduling instruction comprises at least one of a dynamic fuse instruction, a dynamic timeout instruction, an architecture degradation instruction, a loss-stopping operation instruction.
15. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
16. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-7.
17. A computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the method of any of claims 1-7.
CN202111511550.6A 2021-12-06 2021-12-06 Data processing method, device, electronic equipment and medium based on service grid Active CN114205414B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111511550.6A CN114205414B (en) 2021-12-06 2021-12-06 Data processing method, device, electronic equipment and medium based on service grid

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111511550.6A CN114205414B (en) 2021-12-06 2021-12-06 Data processing method, device, electronic equipment and medium based on service grid

Publications (2)

Publication Number Publication Date
CN114205414A CN114205414A (en) 2022-03-18
CN114205414B true CN114205414B (en) 2024-07-26

Family

ID=80652508

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111511550.6A Active CN114205414B (en) 2021-12-06 2021-12-06 Data processing method, device, electronic equipment and medium based on service grid

Country Status (1)

Country Link
CN (1) CN114205414B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115334153B (en) * 2022-08-12 2023-10-27 北京百度网讯科技有限公司 Data processing method and device for service grid

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112333289A (en) * 2021-01-05 2021-02-05 清华四川能源互联网研究院 Reverse proxy access method, device, electronic equipment and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200136921A1 (en) * 2019-09-28 2020-04-30 Intel Corporation Methods, system, articles of manufacture, and apparatus to manage telemetry data in an edge environment
CN112000365B (en) * 2020-08-24 2024-05-17 百度时代网络技术(北京)有限公司 Service grid configuration method, device, equipment and medium based on micro-service architecture

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112333289A (en) * 2021-01-05 2021-02-05 清华四川能源互联网研究院 Reverse proxy access method, device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN114205414A (en) 2022-03-18


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant