CN114793234B - Message processing method, device, equipment and storage medium - Google Patents

Message processing method, device, equipment and storage medium

Info

Publication number
CN114793234B
CN114793234B (application CN202210344371.6A)
Authority
CN
China
Prior art keywords
storage server
server
resource
target
storage
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210344371.6A
Other languages
Chinese (zh)
Other versions
CN114793234A (en)
Inventor
谷崇明
吴永强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202210344371.6A priority Critical patent/CN114793234B/en
Publication of CN114793234A publication Critical patent/CN114793234A/en
Application granted granted Critical
Publication of CN114793234B publication Critical patent/CN114793234B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/10: Protocols in which an application is distributed across nodes in the network
    • H04L67/1001: Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1004: Server selection for load balancing
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/10: Protocols in which an application is distributed across nodes in the network
    • H04L67/104: Peer-to-peer [P2P] networks
    • H04L67/1074: Peer-to-peer [P2P] networks for supporting data block transmission mechanisms

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The disclosure provides a message processing method, device, equipment and storage medium, relates to the field of computer technology, in particular to artificial intelligence fields such as big data and cloud computing, and can be applied to media cloud scenarios. The message processing method includes: in response to a resource request message for acquiring a target resource, determining a target storage server corresponding to the target resource; if the target storage server is a first storage server and the first storage server is not a storage server of a preset type, adding a server identifier of a second storage server to the resource request message, where the second storage server is a storage server of the preset type and the first storage server and the second storage server are located in the same local area network; and sending the resource request message with the added server identifier of the second storage server to the first storage server. The method and device can reduce resource overhead and improve response speed.

Description

Message processing method, device, equipment and storage medium
Technical Field
The disclosure relates to the field of computer technology, in particular to artificial intelligence fields such as big data and cloud computing, can be applied to media cloud scenarios, and specifically concerns a message processing method, device, equipment and storage medium.
Background
A content delivery network (Content Delivery Network, CDN) is an intelligent virtual network built on top of the existing network. Through functional modules such as load balancing, content delivery and scheduling, users can obtain the required content from nearby nodes, which reduces network congestion and improves access response speed and hit rate. A CDN may be divided into multiple tiers, and each tier may include at least one load balancing server and at least one storage server.
When a user obtains a resource through the CDN, the load balancing server, after receiving the resource request message, may determine a target storage server based on a hash algorithm and the resource identifier in the resource request message; this target storage server may be referred to as a hash server. To realize load balancing, the load balancing server can also apply a balancing strategy that disperses requests for the same resource to non-hash servers.
When a resource request message is dispersed to a non-hash server that does not store the target resource, the non-hash server acquires the target resource through its parent layer and feeds it back to the user.
Disclosure of Invention
The present disclosure provides a message processing method, apparatus, device and storage medium.
According to an aspect of the present disclosure, there is provided a message processing method, including: responding to a resource request message for acquiring a target resource, and determining a target storage server corresponding to the target resource; if the target storage server is a first storage server and the first storage server is not a storage server of a preset type, adding a server identifier of a second storage server in the resource request message, wherein the second storage server is the storage server of the preset type and the first storage server and the second storage server are positioned in the same local area network; and sending a resource request message added with the server identifier of the second storage server to the first storage server, wherein the resource request message added with the server identifier of the second storage server is used for triggering the first storage server to acquire the target resource from the second storage server based on the server identifier of the second storage server.
According to another aspect of the present disclosure, there is provided a local area network system comprising: the load balancing server is used for responding to a resource request message for acquiring a target resource and determining a target storage server corresponding to the target resource; if the target storage server is a first storage server and the first storage server is not a storage server of a preset type, adding a server identifier of a second storage server in the resource request message, wherein the second storage server is the storage server of the preset type; and sending a resource request message to which a server identifier of the second storage server is added to the first storage server; a first storage server, configured to forward, in response to the resource request message added with the server identifier of the second storage server, the resource request message to the second storage server based on the server identifier of the second storage server; and sending the target resource sent by the second storage server to the load balancing server; the second storage server is configured to obtain the target resource in response to the resource request message, and send the target resource to the first storage server.
According to another aspect of the present disclosure, there is provided a message processing apparatus including: the determining module is used for responding to a resource request message for acquiring a target resource and determining a target storage server corresponding to the target resource; the adding module is used for adding a server identifier of a second storage server in the resource request message if the target storage server is a first storage server and the first storage server is not a storage server of a preset type, wherein the second storage server is a storage server of the preset type and the first storage server and the second storage server are positioned in the same local area network; and the sending module is used for sending a resource request message added with the server identifier of the second storage server to the first storage server, wherein the resource request message added with the server identifier of the second storage server is used for triggering the first storage server to acquire the target resource from the second storage server based on the server identifier of the second storage server.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the above aspects.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method according to any one of the above aspects.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements a method according to any of the above aspects.
According to the technical scheme, resource expenditure can be reduced, and response speed is improved.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic diagram according to a first embodiment of the present disclosure;
FIG. 2 is a schematic diagram of an application scenario according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram according to a second embodiment of the present disclosure;
FIG. 4 is a schematic diagram according to a third embodiment of the present disclosure;
FIG. 5 is a schematic diagram according to a fourth embodiment of the present disclosure;
FIG. 6 is a schematic diagram according to a fifth embodiment of the present disclosure;
fig. 7 is a schematic diagram of an electronic device for implementing any of the message processing methods of embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the related art, when a resource request message is dispersed to a non-hash server and the non-hash server does not store the target resource, the non-hash server acquires the target resource through its parent layer and feeds it back to the user.
However, the non-hash server obtains the target resource through the parent layer, which increases bandwidth overhead, resulting in higher cost and slower response.
In order to reduce resource overhead and increase response speed, the present disclosure provides the following embodiments.
Fig. 1 is a schematic diagram of a first embodiment of the present disclosure. This embodiment provides a message processing method, which includes:
101. In response to a resource request message for acquiring a target resource, determine a target storage server corresponding to the target resource.
102. If the target storage server is a first storage server and the first storage server is not a storage server of a preset type, add a server identifier of a second storage server to the resource request message, where the second storage server is a storage server of the preset type and the first storage server and the second storage server are located in the same local area network.
103. Send the resource request message with the added server identifier of the second storage server to the first storage server, where this message is used to trigger the first storage server to acquire the target resource from the second storage server based on the server identifier of the second storage server.
In this embodiment, if the first storage server does not store the target resource, the server identifier of the second storage server is added to the resource request message, so that the target resource can be preferentially acquired from the second storage server.
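As a rough illustration of steps 101 to 103, the following Python sketch shows one way a load balancer might implement this logic; the message layout, the field name hash_server_id and the uniform random selection policy are assumptions made for illustration, not the specific scheme of the disclosure.

```python
import random

def process_resource_request(message: dict, candidate_servers: list[str], hash_server: str) -> tuple[str, dict]:
    """Route a resource request, tagging it with the hash server's identifier when needed.

    `hash_server` is the storage server of the "preset type" for this resource,
    assumed to have been determined separately (see the hash-based selection sketch below).
    """
    # Step 101: determine the target storage server among the candidates
    # (a uniform random choice is just one possible balancing policy).
    target = random.choice(candidate_servers)

    if target != hash_server:
        # Step 102: the target is a non-hash server, so record the hash server's
        # identifier in the message; the non-hash server will use it later.
        message["hash_server_id"] = hash_server
    # Step 103: the (possibly annotated) message is sent to the target server.
    return target, message

if __name__ == "__main__":
    msg = {"resource_id": "http://example.com/resource-a"}
    print(process_resource_request(msg, ["c[30]", "c[31]", "c[32]"], hash_server="c[31]"))
```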
For a better understanding of the disclosed embodiments, the application scenario is explained.
As shown in fig. 2, a CDN may be divided into multiple tiers, and each tier may include at least one load balancing server and at least one storage server; for a particular tier 201 of the CDN, the tier above it may be referred to as its parent tier 202.
For a certain hierarchy of CDNs, as shown in fig. 2, it is assumed that this hierarchy includes two load balancing servers (denoted by load balancer-1 and load balancer-2, respectively) and three storage servers (denoted by storage server-1, storage server-2, and storage server-3, respectively).
When a user needs to access a resource, a resource request message can be sent through a user terminal. The resource request message can contain a resource identifier of the resource to be accessed, and the resource to be accessed can be called a target resource. The target resource may be, for example, a web page, a game, an application, audio or video, or text.
The user terminal may include a personal computer (Personal Computer, PC), a mobile device, a smart home device, a wearable device, and the like; mobile devices include cell phones, portable computers, tablet computers, etc.; smart home devices include smart speakers, smart televisions, etc.; and wearable devices include smart watches, smart glasses, etc.
The resource identification of the resource to be accessed may be a uniform resource locator (Uniform Resource Locator, URL) address of the resource.
After a load balancing server receives the resource request message, as shown in fig. 2 and taking load balancing server-2 as an example, it may determine a target storage server based on a preset algorithm, where the target storage server refers to the storage server serving as the forwarding destination.
Generally, the load balancing server may determine the target storage server based on a hash algorithm, and at this time, the load balancing server may perform a hash operation on the resource identifier in the resource request message to obtain a hash value, and determine the target storage server based on the hash value. As shown in fig. 2, it is assumed that the storage server-3 is a target storage server determined based on a hash algorithm. The target storage server determined based on the hash algorithm may be referred to as a hash server.
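Purely as an illustration, a hash-based selection could look like the snippet below, where the resource identifier (the URL) is hashed and mapped onto the list of storage servers; the md5-plus-modulo scheme is an assumption, not necessarily the algorithm used in the disclosure.

```python
import hashlib

def pick_hash_server(resource_url: str, servers: list[str]) -> str:
    """Map a resource URL to a single storage server via a stable hash (illustrative)."""
    digest = hashlib.md5(resource_url.encode("utf-8")).hexdigest()
    return servers[int(digest, 16) % len(servers)]

servers = ["storage-server-1", "storage-server-2", "storage-server-3"]
# The same URL always maps to the same server, which is why it is called the hash server.
print(pick_hash_server("http://example.com/resource-a", servers))
print(pick_hash_server("http://example.com/resource-a", servers))  # identical result
```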
Because the hash algorithm forwards resource request messages for the same resource to the same storage server, for hot-spot resources, i.e. resources with a high access volume within a certain period, the access pressure on the corresponding storage server is high. In order to reduce the access pressure on the target storage server determined by the hash algorithm, a balancing algorithm may be preset to disperse resource request messages for a hot-spot resource to one or more non-hash servers.
It may be understood that, for example, the number of storage servers to disperse to may be determined based on the number of resource request messages accessing the same resource within a preset period of time, and that many storage servers may then be randomly selected as the non-hash servers. Referring to fig. 2, storage server-2 is taken as an example of a non-hash server to which requests are dispersed.
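A possible sketch of this dispersal step is shown below: the hotter the resource within the observation window, the more non-hash servers are selected. The threshold requests_per_extra_server and the random sampling are assumptions for illustration, not part of the disclosure.

```python
import random

def pick_dispersal_servers(request_count: int, servers: list[str], hash_server: str,
                           requests_per_extra_server: int = 1000) -> list[str]:
    """Choose non-hash servers to absorb part of a hot resource's traffic (illustrative)."""
    non_hash = [s for s in servers if s != hash_server]
    # More requests observed in the window -> more extra servers, capped at what is available.
    extra = min(len(non_hash), request_count // requests_per_extra_server)
    return random.sample(non_hash, extra)

servers = ["storage-server-1", "storage-server-2", "storage-server-3"]
print(pick_dispersal_servers(2500, servers, hash_server="storage-server-3"))  # two non-hash servers here
```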
Assume that a resource request message is dispersed to a non-hash server (e.g., storage server-2) that does not store the target resource. In the related art, the non-hash server acquires the target resource through its parent layer and then feeds it back to the user.
However, because the non-hash server and its parent layer belong to different tiers, this cross-tier access increases bandwidth overhead, raises cost, and slows the response.
It will be appreciated that the above description of the scenario is merely an exemplary illustration that facilitates an understanding of embodiments of the present disclosure, and that the implementation of embodiments of the present disclosure is not limited to the above scenario, but may be applied to any applicable scenario.
The message processing method of the present embodiment is described below in conjunction with the above-described scene example:
the message processing method of the present embodiment can be applied to a load balancing server.
The target resource refers to a resource to be accessed (or, to be acquired), such as resource a in fig. 2.
The resource request message may be sent by the user terminal when the user needs to acquire the target resource (e.g. resource a).
The resource request message may include a resource identifier of the target resource, such as a URL address of resource a.
After receiving the resource request message, the load balancing server may determine the target storage server from among the candidate storage servers.
The candidate storage servers and the load balancing server are located in the same local area network. Taking a CDN as an example, as shown in fig. 2, the same local area network may be one tier 201 of the CDN.
The candidate storage servers may be determined by the load balancing server based on a preset policy, for example, a hash server may be determined based on a hash algorithm, and a non-hash server may be determined based on the balancing algorithm. As shown in table 1, for different target resources, candidate storage servers corresponding to the target resources may be determined.
TABLE 1
[Table 1 is an image in the original. From the text, resource A corresponds to candidate storage servers c[30], c[31] and c[32], where c[31] is the hash server and c[30] and c[32] are non-hash servers.]
For a given target resource, the candidate storage servers can comprise a hash server and non-hash servers, where the hash server is the storage server determined based on the hash algorithm and the non-hash servers are the other storage servers determined to realize load balancing.
Referring to Table 1, taking resource A as an example, c[31] is the hash server corresponding to resource A, and c[30] and c[32] are non-hash servers corresponding to resource A.
After the load balancing server receives the resource request message, the target storage server may be determined among the candidate storage servers; for example, for resource A, the target storage server may be determined among c[30], c[31] and c[32].
The present embodiment is not limited as to how to determine the target storage server among the candidate storage servers, and for example, one may be selected at random.
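Continuing the resource A example, and purely as an illustration of one possible selection policy (not mandated by the disclosure), the choice among candidates could be as simple as:

```python
import random

# Candidate storage servers per resource, as in the resource A example; c[31] is the hash server.
candidates = {"resource A": ["c[30]", "c[31]", "c[32]"]}

target_storage_server = random.choice(candidates["resource A"])  # one possible selection policy
print(target_storage_server)
```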
A local area network is a network covering a limited area, generally an internal network, and is characterized by high response speed, low bandwidth overhead and low cost.
Taking CDN as an example, the local area network may be a hierarchy of CDNs.
A CDN may be divided into multiple layers, which may include, for example: an edge node layer (e.g., at the county or city level), a regional node layer (e.g., shared by several provinces), and a central node layer (e.g., a few nodes set up nationwide). The parent layer of the edge node layer is the regional node layer, and the parent layer of the regional node layer is the central node layer. The central node layer may communicate with the source station of the target resource.
Taking an edge node layer at a certain county level as an example, after a load balancing server in the edge node layer receives a resource request message, a target storage server can be determined in storage servers in the edge node layer.
Taking a CDN as an example, the same local area network is a hierarchical layer of the CDN, the second storage server is a hash server, the hash server is a storage server determined based on a hash algorithm, and the first storage server is a non-hash server. That is, the preset type of server is a hash server.
Taking Table 1 as an example, for resource A, the first storage server may be c[30] or c[32], and the second storage server may be c[31].
Taking fig. 2 as an example, assume that the target storage server determined by load balancing server-2 is storage server-2. The load balancer can learn whether each candidate storage server is the hash server; since storage server-2 is a non-hash server, load balancer-2 can add the server identifier of the hash server to the resource request message. Load balancer-2 may then send the resource request message carrying the hash server's identifier to the non-hash server, and the non-hash server may communicate with the hash server based on that identifier and obtain the target resource through the hash server.
In this embodiment, within one tier of the CDN, when the determined target storage server is a non-hash server, the load balancer may add the server identifier of the hash server to the resource request message, so that the non-hash server can obtain the target resource from the hash server. Compared with obtaining the target resource from the parent layer, this reduces bandwidth overhead, reduces cost, and improves response speed.
Fig. 3 is a schematic flow chart of a third embodiment of the disclosure, which takes one tier of a CDN and its parent layer as an example; the tier includes a load balancing server, a hash server and a non-hash server.
As shown in fig. 3, the present embodiment provides a message processing method, including:
301. The load balancing server receives a resource request message containing a resource identifier.
The resource identifier is, for example, the URL address of the target resource.
302. The load balancing server determines the target storage server.
The load balancing server may determine the target storage server from the candidate storage servers based on a preset policy; for example, one candidate storage server may be randomly selected as the target storage server.
The candidate storage servers may include a hash server and non-hash servers. The hash server refers to the storage server determined based on the hash algorithm and the URL address of the target resource, and the non-hash servers refer to the other storage servers.
303. The load balancing server determines whether the target storage server is the hash server; if so, 304 is executed, otherwise 305 is executed.
For example, referring to Table 1, if the target resource is resource A and the target storage server is c[31], the target storage server is determined to be the hash server; otherwise it is a non-hash server.
304. The load balancing server acquires the target resource from the hash server.
For example, the load balancing server may send the resource request message containing the resource identifier to the hash server. If the hash server stores the target resource corresponding to the resource identifier, it sends that resource to the load balancing server; otherwise, the hash server may acquire the resource through the parent layer, for example via a back-to-source operation, and then feed the target resource back to the load balancing server.
After obtaining the target resource, whether from the hash server here or from the non-hash server in the later steps, the load balancing server feeds the target resource back to the user.
305. The load balancing server adds the server identifier of the hash server to the resource request message.
The server identifier may be a server number or an IP address of the server, which is not limited in this embodiment. The load balancing server may be preconfigured with the candidate storage servers and their corresponding server numbers, IP addresses, and so on.
306. The load balancing server sends the resource request message containing the resource identifier and the server identifier of the hash server to the target storage server, i.e., the non-hash server.
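In practice the server identifier could be carried as an extra field of the request, for instance as an HTTP header; the header name X-Hash-Server below is purely hypothetical and is not taken from the disclosure.

```python
# Hypothetical illustration of steps 305 and 306: attaching the hash server's identifier
# to the request before forwarding it to the non-hash server.
headers = {"Host": "cdn.example.com"}
hash_server_id = "10.0.0.31"               # could equally be a server number
headers["X-Hash-Server"] = hash_server_id  # assumed header name, for illustration only

request_line = "GET /videos/resource-a.mp4 HTTP/1.1"
print(request_line)
for name, value in headers.items():
    print(f"{name}: {value}")
```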
307. After receiving the resource request message containing the resource identifier and the server identifier of the hash server, the non-hash server judges whether it stores the target resource corresponding to the resource identifier; if so, 308 is executed, otherwise 309 is executed.
308. The non-hash server sends its locally stored target resource to the load balancing server.
309. The non-hash server sends a resource request message containing the resource identifier to the hash server, based on the server identifier of the hash server.
The resource request message may further include a server identifier of the hash server.
310. After receiving the resource request message containing the resource identifier, the hash server determines whether it stores the target resource corresponding to the resource identifier; if so, 311 is executed, otherwise 312 is executed.
311. The hash server sends its stored target resource to the non-hash server, and the non-hash server may send the target resource to the load balancing server.
312. The hash server acquires the target resource through the parent layer and sends the acquired target resource to the non-hash server, and the non-hash server may send the target resource to the load balancing server.
For example, if the target resource is already stored on a node within the parent layer, it may be obtained from that node; otherwise the request goes up layer by layer until it reaches the source station node corresponding to the target resource, i.e., a back-to-source operation is performed to acquire the target resource.
In addition, after the non-hash server acquires the target resource from the hash server, it can store the acquired target resource locally, i.e., on the non-hash server, so that when a user requests the target resource again, the locally stored target resource can be fed back to the user.
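Steps 307 to 312, together with the local caching just described, can be pictured with the following sketch from the non-hash server's point of view; the in-memory dicts standing in for the servers' storage and the function names are assumptions for illustration.

```python
# Illustrative sketch of steps 307-312 as seen by the non-hash server; the hash
# server is modelled here as a simple in-memory dict, which is an assumption.
hash_server_store = {"resource-a": b"...resource bytes..."}  # what the hash server already holds
local_store: dict[str, bytes] = {}                           # the non-hash server's own storage

def fetch_from_hash_server(hash_server_id: str, resource_id: str) -> bytes:
    # Stand-in for steps 310-312: in a real deployment the identifier would address the
    # hash server over the network; the hash server returns the resource from its own
    # store or, failing that (not modelled here), fetches it through its parent layer.
    return hash_server_store[resource_id]

def handle_request(resource_id: str, hash_server_id: str) -> bytes:
    if resource_id in local_store:                              # steps 307-308: serve the local copy
        return local_store[resource_id]
    data = fetch_from_hash_server(hash_server_id, resource_id)  # step 309: forward to the hash server
    local_store[resource_id] = data                             # cache locally, as described above
    return data

print(handle_request("resource-a", hash_server_id="c[31]"))
```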
In this embodiment, the non-hash server may preferentially obtain the target resource from the hash server, and since the non-hash server and the hash server are located in the same layer of the CDN, bandwidth overhead may be saved, cost may be reduced, and response speed may be improved.
Fig. 4 is a schematic diagram of a fourth embodiment of the present disclosure, which provides a local area network system. As shown in fig. 4, the system 400 includes: a load balancing server 401, a first storage server 402, and a second storage server 403.
The load balancing server 401 is configured to: determine, in response to a resource request message for acquiring a target resource, a target storage server corresponding to the target resource; if the target storage server is a first storage server and the first storage server is not a storage server of a preset type, add a server identifier of a second storage server to the resource request message, where the second storage server is a storage server of the preset type; and send, to the first storage server 402, the resource request message to which the server identifier of the second storage server has been added. The first storage server 402 is configured to forward, in response to the resource request message added with the server identifier of the second storage server, the resource request message to the second storage server 403 based on the server identifier of the second storage server, and to send the target resource sent by the second storage server to the load balancing server. The second storage server 403 is configured to obtain the target resource in response to the resource request message and send the target resource to the first storage server 402.
In this embodiment, if the first storage server does not store the target resource, the server identifier of the second storage server is added to the resource request message, so that the target resource can be preferentially acquired from the second storage server.
In some embodiments, the second storage server 403 is specifically configured to: and if the target resource is stored in the second storage server, acquiring the target resource in the second storage server.
By acquiring the target resource from the second storage server, the response speed can be improved relative to the manner in which the target resource is acquired from the parent layer.
In some embodiments, the second storage server 403 is specifically configured to: and if the target resource is not stored in the second storage server, acquiring the target resource from other networks outside the local area network.
By acquiring the target resource from other networks, the normal acquisition of the target resource can be ensured.
In some embodiments, the second storage server 403 is further configured to: and storing the target resources acquired from the other networks.
By storing the target resource, the second storage server can feed it back directly from its own storage in subsequent flows, thereby improving response speed.
In some embodiments, referring to fig. 5, the local area network is a hierarchy of CDNs, denoted by a CDN layer 500 in fig. 5, and accordingly, the CDN layer may include a load balancing server 501, where the second storage server is a hash server 503, the hash server is a storage server determined based on a hash algorithm, and the first storage server is a non-hash server 502.
Applied within one tier of a CDN, this approach can optimize the acquisition path for hot-spot resources of the CDN, reducing cost and improving response speed.
Fig. 6 is a schematic diagram of a sixth embodiment of the present disclosure, where a message processing apparatus 600 includes: a determining module 601, an adding module 602 and a transmitting module 603.
The determining module 601 is configured to determine, in response to a resource request message for obtaining a target resource, a target storage server corresponding to the target resource; the adding module 602 is configured to add, if the target storage server is a first storage server and the first storage server is not a storage server of a preset type, a server identifier of a second storage server in the resource request message, where the second storage server is a storage server of the preset type and the first storage server and the second storage server are located in the same local area network; the sending module 603 is configured to send a resource request message for adding a server identifier of the second storage server to the first storage server, where the resource request message for adding the server identifier of the second storage server is used to trigger the first storage server to acquire the target resource from the second storage server based on the server identifier of the second storage server.
In this embodiment, if the first storage server does not store the target resource, the server identifier of the second storage server is added to the resource request message, so that the target resource can be preferentially acquired from the second storage server.
In some embodiments, the same lan is a hierarchy of CDNs, the second storage server is a hash server, the hash server is a storage server determined based on a hash algorithm, and the first storage server is a non-hash server.
Applied within one tier of a CDN, this approach can optimize the acquisition path for hot-spot resources of the CDN, reducing cost and improving response speed.
It is to be understood that in the embodiments of the disclosure, the same or similar content in different embodiments may be referred to each other.
It can be understood that "first", "second", etc. in the embodiments of the present disclosure are only used for distinguishing, and do not indicate the importance level, the time sequence, etc.
In the technical solution of the disclosure, the collection, storage, use, processing, transmission, provision and disclosure of the user personal information involved comply with the relevant laws and regulations and do not violate public order and good customs.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 7 illustrates a schematic block diagram of an example electronic device 700 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile apparatuses, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing apparatuses. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the electronic device 700 includes a computing unit 701 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the electronic device 700 may also be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Various components in the electronic device 700 are connected to the I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, etc.; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, an optical disk, or the like; and a communication unit 709 such as a network card, modem, wireless communication transceiver, etc. The communication unit 709 allows the electronic device 700 to exchange information/data with other devices through a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 701 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 701 performs the respective methods and processes described above, such as a message processing method. For example, in some embodiments, the message processing method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 708. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 700 via the ROM 702 and/or the communication unit 709. When a computer program is loaded into RAM 703 and executed by computing unit 701, one or more steps of the message processing method described above may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured to perform the message processing method by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described here may be realized in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, and which may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable load balancing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system and overcomes the defects of high management difficulty and weak service scalability found in traditional physical hosts and VPS ("Virtual Private Server") services. The server may also be a server of a distributed system or a server combined with a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel or sequentially or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (11)

1. A message processing method, comprising:
responding to a resource request message for acquiring a target resource, and determining a target storage server corresponding to the target resource;
if the target storage server is a first storage server and the first storage server is not a storage server of a preset type, adding a server identifier of a second storage server in the resource request message, wherein the second storage server is the storage server of the preset type and the first storage server and the second storage server are positioned in the same local area network;
and sending a resource request message added with the server identifier of the second storage server to the first storage server, wherein the resource request message added with the server identifier of the second storage server is used for triggering the first storage server to acquire the target resource from the second storage server based on the server identifier of the second storage server.
2. The method of claim 1, wherein the same local area network is a hierarchical layer of a content delivery network CDN, the second storage server is a hash server, the hash server is a storage server determined based on a hash algorithm, and the first storage server is a non-hash server.
3. A local area network system, comprising:
the load balancing server is used for responding to a resource request message for acquiring a target resource and determining a target storage server corresponding to the target resource; if the target storage server is a first storage server and the first storage server is not a storage server of a preset type, adding a server identifier of a second storage server in the resource request message, wherein the second storage server is the storage server of the preset type; and sending a resource request message to which a server identifier of the second storage server is added to the first storage server;
a first storage server, configured to forward, in response to the resource request message added with the server identifier of the second storage server, the resource request message to the second storage server based on the server identifier of the second storage server; and sending the target resource sent by the second storage server to the load balancing server;
and the second storage server is used for responding to the resource request message, acquiring the target resource and sending the target resource to the first storage server.
4. The system of claim 3, wherein the second storage server is specifically configured to:
and if the target resource is stored in the second storage server, acquiring the target resource in the second storage server.
5. The system of claim 3, wherein the second storage server is specifically configured to:
and if the target resource is not stored in the second storage server, acquiring the target resource from other networks outside the local area network.
6. The system of claim 5, wherein the second storage server is further configured to:
and storing the target resources acquired from the other networks.
7. The system according to any of claims 3-6, wherein the local area network is a hierarchy of content delivery networks CDN, the second storage server is a hash server, the hash server is a storage server determined based on a hash algorithm, and the first storage server is a non-hash server.
8. A message processing apparatus, comprising:
the determining module is used for responding to a resource request message for acquiring a target resource and determining a target storage server corresponding to the target resource;
the adding module is used for adding a server identifier of a second storage server in the resource request message if the target storage server is a first storage server and the first storage server is not a storage server of a preset type, wherein the second storage server is a storage server of the preset type and the first storage server and the second storage server are positioned in the same local area network;
and the sending module is used for sending a resource request message added with the server identifier of the second storage server to the first storage server, wherein the resource request message added with the server identifier of the second storage server is used for triggering the first storage server to acquire the target resource from the second storage server based on the server identifier of the second storage server.
9. The apparatus of claim 8, wherein the same local area network is a hierarchy of a content delivery network CDN, the second storage server is a hash server, the hash server is a storage server determined based on a hash algorithm, and the first storage server is a non-hash server.
10. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-2.
11. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-2.
CN202210344371.6A 2022-03-31 2022-03-31 Message processing method, device, equipment and storage medium Active CN114793234B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210344371.6A CN114793234B (en) 2022-03-31 2022-03-31 Message processing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210344371.6A CN114793234B (en) 2022-03-31 2022-03-31 Message processing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114793234A CN114793234A (en) 2022-07-26
CN114793234B true CN114793234B (en) 2023-04-25

Family

ID=82462059

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210344371.6A Active CN114793234B (en) 2022-03-31 2022-03-31 Message processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114793234B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108337327A (en) * 2018-04-26 2018-07-27 拉扎斯网络科技(上海)有限公司 A kind of resource acquiring method and proxy server
WO2019057212A1 (en) * 2017-09-22 2019-03-28 中兴通讯股份有限公司 Method, apparatus and device for scheduling service within cdn node, and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9882976B1 (en) * 2015-06-16 2018-01-30 Amazon Technologies, Inc. Supporting heterogeneous environments during code deployment
CN110300184B (en) * 2019-07-10 2022-04-01 深圳市网心科技有限公司 Edge node distribution method, device, scheduling server and storage medium
CN112866310B (en) * 2019-11-12 2022-03-04 北京金山云网络技术有限公司 CDN back-to-source verification method and verification server, and CDN cluster
CN113342517A (en) * 2021-05-17 2021-09-03 北京百度网讯科技有限公司 Resource request forwarding method and device, electronic equipment and readable storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019057212A1 (en) * 2017-09-22 2019-03-28 中兴通讯股份有限公司 Method, apparatus and device for scheduling service within cdn node, and storage medium
CN108337327A (en) * 2018-04-26 2018-07-27 拉扎斯网络科技(上海)有限公司 A kind of resource acquiring method and proxy server

Also Published As

Publication number Publication date
CN114793234A (en) 2022-07-26

Similar Documents

Publication Publication Date Title
US11012892B2 (en) Resource obtaining method, apparatus, and system
US20230161541A1 (en) Screen projection method and system
CN112437006B (en) Request control method and device based on API gateway, electronic equipment and storage medium
US10178033B2 (en) System and method for efficient traffic shaping and quota enforcement in a cluster environment
CN113656176B (en) Cloud equipment distribution method, device and system, electronic equipment, medium and product
WO2017185615A1 (en) Method for determining service status of service processing device and scheduling device
CN114697391B (en) Data processing method, device, equipment and storage medium
CN113361913A (en) Communication service arranging method, device, computer equipment and storage medium
CN114500633B (en) Data forwarding method, related device, program product and data transmission system
US11463376B2 (en) Resource distribution method and apparatus in Internet of Things, device, and storage medium
CN114911602A (en) Load balancing method, device, equipment and storage medium for server cluster
CN114793234B (en) Message processing method, device, equipment and storage medium
EP4142258A1 (en) Edge computing network, data transmission method and apparatus, device and storage medium
CN115567602A (en) CDN node back-to-source method, device and computer readable storage medium
US10250515B2 (en) Method and device for forwarding data messages
CN105025042A (en) Method of determining data information, system and proxy servers
CN114827159B (en) Network request path optimization method, device, equipment and storage medium
CN115373831A (en) Data processing method, device and computer readable storage medium
CN105072047A (en) Message transmitting and processing method
CN115037803B (en) Service calling method, electronic equipment and storage medium
CN114449031B (en) Information acquisition method, device, equipment and storage medium
CN113992760B (en) Method, device, equipment and storage medium for scheduling back source traffic
CN115086300B (en) Video file scheduling method and device
CN104092735A (en) Cloud computing data access method and system based on binary tree
CN115242733B (en) Message multicast method, multicast gateway, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant