WO2009015549A1 - Shared cache system, implementation method thereof and implementation software thereof - Google Patents


Publication number
WO2009015549A1
Authority
WO
WIPO (PCT)
Prior art keywords
space
cache
shared
request
shared cache
Application number
PCT/CN2008/001146
Other languages
English (en)
French (fr)
Inventor
Zhanming Wei
Original Assignee
Hangzhou H3C Technologies Co., Ltd.
Application filed by Hangzhou H3C Technologies Co., Ltd. filed Critical Hangzhou H3C Technologies Co., Ltd.
Publication of WO2009015549A1 publication Critical patent/WO2009015549A1/zh
Priority to US12/697,376 priority Critical patent/US20100138612A1/en


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 — Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 — Addressing or allocation; Relocation
    • G06F 12/08 — Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 — Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0806 — Multiuser, multiprocessor or multiprocessing cache systems
    • G06F 12/084 — Multiuser, multiprocessor or multiprocessing cache systems with a shared cache

Definitions

  • the present invention relates to the field of communications technologies, and in particular to a shared cache system, an implementation method thereof, and implementation software thereof. Background Art
  • Data systems in the prior art are generally classified into centralized systems and distributed systems.
  • the main control unit and the service processing units have their own memory units for storing their respective data, and each service processing unit has an interface connected to its downlink devices; the service processing units communicate with the main control unit through the control channel of the switching network, and with one another through its service channel.
  • each service processing unit is connected to an interface through a service channel of the switching network, and the service processing unit, the interface, and the main control unit are connected through a control channel of the switching network.
  • the service processing unit includes a control engine, a memory unit, and a stream acceleration engine.
  • the present invention provides a shared cache system and an implementation method for solving the defect that data cannot be directly shared between service processing units in the prior art.
  • the present invention provides a shared cache implementation method for a system including a main control unit and a plurality of service processing units, the method comprising setting a shared cache for the main control unit and the plurality of service processing units, and executing the following steps:
  • for each operation request to write data to a space in the shared cache, the requests write data to the space mutually exclusively, implementing shared use of the cache;
  • for each operation request to read data from a space in the shared cache, the requests read data from the space simultaneously, likewise sharing the cache.
  • the step of writing data mutually exclusively for each request, thereby sharing the cache space, specifically includes: queuing the write requests in a preset order; while one write request is writing to the space, prohibiting other requests from writing to or reading from the same space; and after the current write request finishes writing, allowing subsequent requests to write to or read from the space.
  • prohibiting other requests from writing to or reading from the same space includes: setting a write flag bit for the space, and releasing or changing the write flag bit after the write ends, to allow subsequent requests to write to or read from the space.
  • the step of writing data mutually exclusively to a space in the shared cache further includes: after a write request for the space is received, if the write data is not received within a preset time, returning write failure information and allowing other requests to write or read.
  • the step of reading data simultaneously, thereby sharing the cache space, includes: reading the data of the space simultaneously according to the read requests, while prohibiting other requests from writing to the same space; after the read requests finish reading, allowing subsequent requests to write to the space.
  • prohibiting other requests from writing to the same space includes: setting a read flag bit for the space, and releasing or changing the read flag bit after the read is completed, to allow subsequent requests to write to the space.
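The write-flag/read-flag mechanism above behaves like a readers-writer lock: writes are exclusive, reads proceed concurrently, and a write that cannot start within a preset time returns failure. A minimal single-process sketch in Python (class and method names are invented for illustration; the patent's timeout covers waiting for the write data, while here, for brevity, the timeout covers waiting for the space to become free):

```python
import threading

class SharedSpace:
    """One space in the shared cache: writes are mutually exclusive,
    reads proceed simultaneously (a readers-writer scheme)."""

    def __init__(self):
        self._cond = threading.Condition()
        self._write_flag = False   # set while a write request owns the space
        self._readers = 0          # number of simultaneous readers

    def write(self, do_write, timeout=1.0):
        """Queue behind other writers/readers; return False on timeout
        (the 'write failure information' of the method)."""
        with self._cond:
            ok = self._cond.wait_for(
                lambda: not self._write_flag and self._readers == 0,
                timeout=timeout)
            if not ok:
                return False                   # write failed; others proceed
            self._write_flag = True            # set the write flag bit
        try:
            do_write()
        finally:
            with self._cond:
                self._write_flag = False       # release the write flag bit
                self._cond.notify_all()        # allow subsequent requests
        return True

    def read(self, do_read):
        """Reads are only excluded by an in-progress write."""
        with self._cond:
            self._cond.wait_for(lambda: not self._write_flag)
            self._readers += 1                 # read flag: writes now blocked
        try:
            return do_read()
        finally:
            with self._cond:
                self._readers -= 1
                if self._readers == 0:
                    self._cond.notify_all()
```

In a real shared cache unit this arbitration would live in the cache controller hardware; the sketch only shows the ordering rules the text describes.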
  • the step of initializing the shared cache includes: performing a self-test on the shared cache and, after the self-test is completed, reporting status information to the main control unit and each service processing unit, including the total cache space, free space, and unavailable space, with the corresponding start and end addresses.
  • the operation requests received and parsed by the shared cache include a space allocation request, a space operation request, a space release request, and a shared space allocation request.
  • after the space allocation request is received and parsed, the method includes: performing space allocation according to the request and initializing the space; the space allocation request is sent to the shared cache through the following steps: each service processing unit identifies whether its service is related to an existing space; if yes, a space operation request is sent to the shared cache; otherwise, a space allocation request is sent to the shared cache.
  • the space operation requests received and parsed include the space read operation requests and space write operation requests of each service processing unit.
  • after the space release request is received and parsed, the method includes: releasing the space according to the request; the space release request is sent to the shared cache through the following steps: each service processing unit reports its space release request to the main control unit; the main control unit identifies whether all the service processing units related to the space have made a space release request, and if so issues the space release request to the shared cache; otherwise, it keeps monitoring the space release requests of the service processing units.
  • after the shared space allocation request is received and parsed, the method includes: allocating a shared space, and assigning to the service processing units in the group the right to operate on the shared space, the right including a read permission and a write permission; the service processing unit requesting the shared space obtains the read/write permission, writes to the shared space, and at the same time writes the address of the target recipient in the group; when the writing is completed, the read/write permission of that service processing unit is released, and the target recipient is notified to read the shared space.
  • allocating the shared space further includes: releasing the shared space if it is not accessed for more than a predetermined time.
  • the allocating the shared space further includes: releasing the shared space according to a release request of the service processing unit requesting the shared space.
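The two release paths just described — aging out a shared space after a quiet period, and explicit release on request of the unit that applied for it — can be sketched as a small bookkeeping table. All names, the 30-second age limit, and the injected clock are illustrative assumptions, not taken from the patent:

```python
import time

class SharedSpaceTable:
    """Tracks allocated shared spaces; a space is released either
    explicitly by the requesting unit or by aging when unused."""
    AGE_LIMIT = 30.0  # seconds without access before aging out (assumed value)

    def __init__(self, now=time.monotonic):
        self._now = now
        self.spaces = {}          # space_id -> (base, size, last_access)

    def allocate(self, space_id, base, size):
        self.spaces[space_id] = (base, size, self._now())

    def touch(self, space_id):
        """Record an access so the space is not aged out."""
        base, size, _ = self.spaces[space_id]
        self.spaces[space_id] = (base, size, self._now())

    def release(self, space_id):
        # explicit release, per the requesting service processing unit
        self.spaces.pop(space_id, None)

    def age_out(self):
        # release every shared space not accessed within AGE_LIMIT
        cutoff = self._now() - self.AGE_LIMIT
        stale = [k for k, (_, _, t) in self.spaces.items() if t < cutoff]
        for k in stale:
            del self.spaces[k]
        return stale
```

Injecting the clock (`now=`) keeps the aging policy testable without real waiting; a hardware implementation would use the controller's timer instead.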
  • the present invention also provides shared cache implementation software, applied to a system including a main control unit and a plurality of service processing units, the main control unit and the plurality of service processing units being connected to a shared cache;
  • the software performs the following steps:
  • for each operation request to write data to a space in the shared cache, the requests write data to the space mutually exclusively, implementing shared use of the cache;
  • for each operation request to read data from a space in the shared cache, the requests read data from the space simultaneously, likewise sharing the cache.
  • the present invention also provides a shared cache system, including a main control unit, a plurality of service processing units, and a shared cache unit connected to the main control unit and the service processing units respectively for implementing cache sharing;
  • the cache unit specifically includes:
  • a high-speed interface connected to the main control unit and the plurality of service processing units respectively, for receiving the various operation requests for the shared cache unit and forwarding the data transmitted between the service processing units and the shared cache unit;
  • a cache array for providing cache space and storing data at high speed;
  • a cache controller is connected between the high-speed interface and the cache array, and is configured to perform mutually exclusive write and simultaneous readout of data on the cache array according to the various operation requests, and implement cache sharing.
  • the cache controller specifically includes: an operation identification subunit for parsing the operation requests for the shared cache; a write control subunit for queuing the write requests in a preset order, prohibiting other requests from writing to or reading from a space while one write request is writing to it, and allowing subsequent requests to write to or read from the space after the current write request finishes; and a read control subunit, configured to read the data of a space simultaneously according to the read requests while prohibiting other requests from writing to the same space, and to allow subsequent requests to write to the space after the read requests finish reading.
  • the cache controller further includes: a first aging subunit, connected to the write control subunit, configured to perform an aging refresh on the space write request.
  • the cache controller further includes: a cache self-test subunit, configured to perform initialization on the cache array, and report status information to the main control unit and each service processing unit, including total cache space, available space, and unavailable Space and corresponding start and end addresses.
  • the cache controller further includes: an address mapping subunit, configured to perform address mapping between the high-speed interface and the cache array according to the space allocation request received by the operation identification subunit and to allocate cache space; and an address release subunit, configured to release the cache space according to the space release request received by the operation identification subunit; the space release request is issued to the shared cache by the main control unit when all the service processing units related to the space have made a space release request.
  • the cache controller further includes: an extension subunit coupled to the address mapping subunit for extending an addressing space for a cache address in the cache array.
  • the cache controller further includes: a shared space allocation subunit, connected to the address mapping subunit, configured to allocate a shared space for the service processing units in a group according to the shared space allocation request received by the operation identification subunit; an operation authority setting subunit, connected to the shared space allocation subunit, configured to assign to a service processing unit the right to operate on the shared space, the right including a read permission and a write permission, and to reclaim that right after the service processing unit has operated on the shared space; and a notification subunit, connected to the operation authority setting subunit, configured to obtain the address of the target recipient in the group and, after the write operation ends, notify the target recipient to read the cache space.
  • the cache controller further includes: a second aging subunit, connected to the shared space allocation subunit, for periodically refreshing the shared space.
  • the shared cache system is a distributed system or a centralized system.
  • the embodiment of the present invention has the following advantages:
  • the shared cache is set for the main control unit and each service processing unit, and a mutual exclusion function is provided in the cache to ensure the consistency of the data operated on by the service processing units.
  • FIG. 1 is a schematic diagram of an exclusive memory distribution structure in a centralized system in the prior art
  • FIG. 2 is a schematic diagram of an exclusive memory distribution structure in a distributed system in the prior art
  • FIG. 3 is a structural diagram of a centralized system using a shared cache unit in the present invention.
  • FIG. 4 is a structural diagram of a distributed system using a shared cache unit in the present invention.
  • FIG. 5 is a flowchart of an application example of attack statistics in the present invention;
  • FIG. 6 is a flowchart of initialization of the shared cache system in the present invention;
  • FIG. 7 is a flowchart of an application process for a shared cache space in the present invention;
  • FIG. 8 is a flowchart of implementing the internal mutual exclusion mechanism of the shared cache unit in the present invention;
  • FIG. 9 is a structural diagram of a shared cache unit in the present invention.
  • Detailed Description
  • Specific embodiments of the present invention are described in detail below. It should be noted that the embodiments described herein are for illustrative purposes only and are not intended to limit the invention.
  • FIG. 3 shows a centralized system using a shared cache unit in an embodiment of the present invention, and FIG. 4 shows a distributed system using a shared cache unit.
  • the shared cache unit specifically includes: a high speed interface, a cache controller, and a cache array.
  • the high-speed interface must be based on a reliable connection, such as PCIE (Peripheral Component Interconnect Express), HT (HyperTransport), or Rapid IO (Rapid Input and Output), to guarantee reliable transmission between the shared cache unit and the service processing units at the bottom layer.
  • the cache controller is the core module of the shared cache unit.
  • its main functions include: implementing the address mapping between the high-speed interface and the cache array, extending the addressing space, and optionally expanding the cache capacity; implementing the mutual exclusion function when the service processing units access the same cache address, to ensure the consistency of the cached data; and optionally providing a timer so that automatic cache aging can be configured per timer period.
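The controller's address mapping between the high-speed interface and the cache array, including the optional addressing-space extension, might be modeled as base-plus-offset translation over per-space windows. This is entirely an illustrative sketch; the patent does not specify the mapping scheme, and all names and the bank-bit extension are assumptions:

```python
class CacheController:
    """Maps interface addresses onto the cache array, one window per
    allocated space; extension bits widen the addressable array."""

    def __init__(self, extension_bits=0):
        self.windows = {}                 # iface_base -> (array_base, length)
        self.extension_bits = extension_bits

    def map_space(self, iface_base, array_base, length):
        """Record one allocated space's window on the interface."""
        self.windows[iface_base] = (array_base, length)

    def translate(self, iface_addr, bank=0):
        """Return the cache-array address for an interface address."""
        for base, (array_base, length) in self.windows.items():
            if base <= iface_addr < base + length:
                offset = iface_addr - base
                # extension: bank bits grow capacity beyond the base range
                return (bank << self.extension_bits) | (array_base + offset)
        raise ValueError("address not mapped to any allocated space")
```

A hardware controller would do this with registers and comparators rather than a dictionary, but the translation rule is the same.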
  • the cache array is used to store data for the main control unit and the service processing units.
  • the shared cache system of the present invention can flexibly implement various functions.
  • a shared cache is set for the main control unit and the multiple service processing units, providing a unified storage space for the service processing units, so as to ensure that the data processed by different service processing units is consistent.
  • The following example illustrates the implementation process of this method:
  • the step of writing data mutually exclusively for each request, thereby sharing the cache space, specifically includes: queuing the write requests in a preset order; while one write request is writing to the space, prohibiting other requests from writing to or reading from the same space; and after the current write request finishes writing, allowing subsequent requests to write to or read from the space.
  • the step of prohibiting other requests from writing to or reading from the same space includes: setting a write flag bit for the space, and releasing or changing the write flag bit after the write ends, to allow subsequent requests to write to or read from the space.
  • the method further includes: after a write request for the space is received, if the write data is not received within a preset time, returning write failure information and allowing other requests to write or read.
  • the step of reading data simultaneously, thereby sharing the cache space, includes: reading the data of the space simultaneously according to the read requests, while prohibiting other requests from writing to the same space; after the read requests finish reading, allowing subsequent requests to write to the space.
  • the step of prohibiting other requests from writing to the same space includes: setting a read flag bit for the space, which disallows other requests from writing but still allows other requests to read; after the read is completed, the read flag bit is released or changed to allow subsequent requests to write to the space.
  • the present invention can ensure the consistency of the operation data of each service processing unit by recognizing the spatial operation request and performing spatial mutual exclusion writing and simultaneous reading.
  • receiving and parsing the operation request for the shared cache further includes a space allocation request and a space release request.
  • the space allocation is performed after the spatial allocation request is received and parsed to ensure further writing and reading.
  • the method further includes: performing space allocation according to the request and initializing the space; the space allocation request is sent through the following steps:
  • each service processing unit identifies whether its service is related to an existing space; if yes, a space operation request is sent to the shared cache; otherwise, a space allocation request is sent to the shared cache.
  • Space release is performed after receiving and parsing the space release request to ensure space allocation for subsequent requests. Specifically, the space release is performed according to the request.
  • the space release request is sent to the shared cache through the following steps: each service processing unit reports its space release request to the main control unit; the main control unit identifies whether all the service processing units related to the space have made a space release request, and if so sends the space release request to the shared cache; otherwise, it keeps monitoring the space release requests of the service processing units.
  • FIG. 5 is an application example of the attack statistics according to the present invention, and includes the following steps:
  • Step s501 the service processing unit starts flow-based statistics.
  • Step s502 the flow enters the service processing unit from the interface unit.
  • step s503: the service processing unit determines whether the stream hits the session table, that is, compares identifiers in the stream with the parameters in the pre-stored session table; if it hits, the packet is a normal packet, and the process goes to step s511. If there is no hit, the packet may be an attack packet, and the process goes to step s504 to further determine whether it is an attack packet.
  • step s504: the service processing unit establishes a new connection and determines whether the new connection is completed; if completed, the flow is normal, and the process goes to step s512; if not, the flow is proven to be an attack flow, and the process goes to step s505.
  • steps s501 to s505 determine whether a flow is an attack flow. After the attack flow is determined, steps s505 to s511 are performed, and the statistics on the attack flow are stored in the shared cache unit.
  • step s505: the service processing unit queries whether the cache has already allocated space related to the connection; if yes, the process proceeds to step s510; if not, to step s506.
  • Step s506 The service processing unit requests a cache space for the connection.
  • step s507: the service processing unit determines whether there is enough cache space; if not, the process goes to step s528; if yes, to step s508.
  • Step s508 the service processing unit allocates a buffer space for the connection, where the space includes a cache start address and an address length.
  • Step s509 the service processing unit initializes the cache space, that is, clears the cache space.
  • step s510: the service processing unit writes the counts of various statistics to the allocated shared cache space, and the process proceeds to step s518.
  • step s511: a connection has been established, and the service processing unit performs the session operation.
  • step s512: the service processing unit reports to the main control unit, and the process proceeds to step s513.
  • Step s513 The main control unit detects whether all the service processing units have completed the new connection related to the connection, and if not, continues to detect; if yes, the process proceeds to step s514.
  • step s514: the main control unit sends a release command to the shared cache unit.
  • Step s515 The shared cache unit receives the release command and the address that needs to be released.
  • step s516: the shared cache unit releases the cache at the corresponding address, making it available for reallocation.
  • Step s517 The shared cache unit returns release success information to the main control unit.
  • the function of the above steps s512 to s517 is to determine that no corresponding attack message is released before the corresponding shared cache is released, so as to store other data.
  • step s518: the shared cache unit receives the write command, the data to be written, and the address from the service processing unit.
  • step s519: the shared cache unit starts a write operation timer.
  • step s520: the shared cache unit determines from the address flag whether the address can be written; if yes, the process goes to step s522; if not, to step s521.
  • step s521: the shared cache unit determines whether the timer has expired; if not, the process returns to step s520; if it has timed out, the process goes to step s527.
  • Step s522 the shared cache unit sets the address flag to be unwritable.
  • step s524: the shared cache unit reads the original data at the address, adds the data to be written, and writes the resulting sum back to the address space.
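Step s524 is a read-modify-write accumulate guarded by the per-address writable flag of steps s520 and s522. A compact sketch (names and widths are illustrative; the retry/timeout loop of steps s519-s521 and s527 is omitted for brevity):

```python
class StatisticsCache:
    """Per-address attack-flow counters updated by read-add-write,
    as in step s524; one writable flag per address (s520/s522)."""

    def __init__(self, size):
        self.cells = [0] * size
        self.writable = [True] * size   # per-address write flag

    def accumulate(self, addr, delta):
        """Add `delta` to the counter at `addr` under the flag."""
        if not self.writable[addr]:
            return False                  # flag busy: caller retries/times out
        self.writable[addr] = False       # s522: mark address unwritable
        total = self.cells[addr] + delta  # s524: read original, add new data
        self.cells[addr] = total          # write the sum back to the space
        self.writable[addr] = True        # release the flag for later requests
        return True
```

The flag is what keeps two units from interleaving their read-add-write sequences and losing an update.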
  • step s527: the shared cache unit releases the timer.
  • Step s528 The shared cache unit returns a write failure message to the service processing unit.
  • steps s518 to s528 are the process of writing statistical data to the shared cache unit, and specifically explain how the mutual exclusion mechanism of the present invention maintains data operation consistency.
  • steps s501 to s512 are the processing flow in the service processing unit;
  • steps s513 to s514 are the processing flow in the main control unit;
  • steps s515 to s528 are the processing flow in the shared cache unit.
  • a writable flag bit is provided for each allocated buffer space.
  • when the flag bit is busy, it indicates that another unit is operating on the cache space, and the requester must wait, which ensures data consistency. Read operations, however, need no mutual exclusion: multiple units can read at the same time, which maintains the data read rate and speeds up real-time data processing.
  • System initialization is also required before statistics are collected, as shown in FIG. 6, including the following steps: step s601, system startup and initialization.
  • the shared cache unit performs a self-test.
  • the shared cache unit reports status information to the main control unit and each service processing unit.
  • the status information includes: total cache size, start and end address; available cache capacity, start and end address; unavailable cache capacity, start and end address. The initialization ends after the status information is reported.
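The status information reported after self-test — total, available, and unavailable capacity, each with start and end addresses — could be modeled as a small record type. Field and class names are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class CacheRegion:
    """One contiguous region with its start and end addresses (inclusive)."""
    start: int
    end: int

    @property
    def size(self):
        return self.end - self.start + 1

@dataclass
class CacheStatus:
    """Status reported to the main control unit and each service
    processing unit after the shared cache self-test."""
    total: CacheRegion
    available: list      # list[CacheRegion]
    unavailable: list    # list[CacheRegion]

    def available_capacity(self):
        return sum(r.size for r in self.available)
```

Reporting regions rather than bare sizes lets the receivers address the usable space directly, which is what the start/end addresses in the text are for.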
  • the shared cache implementation method provided by the present invention ensures the consistency of operational data by setting a shared cache. Further, the shared cache implementation method provided by the present invention can also ensure high-speed exchange of shared data between service processing units, and realize high-speed data sharing.
  • the shared cache unit performs after receiving and parsing the shared space allocation request:
  • the right includes a read permission and a write permission;
  • the service processing unit requesting the shared space acquires the read/write permission, performs the writing of the shared space, and simultaneously writes the address of the target recipient in the group;
  • the writing is completed, the read/write permission of the service processing unit is released, and the target recipient is notified to perform the reading of the shared space.
  • the shared space is released if it is not accessed within a predetermined time.
  • the method further includes: releasing the shared space according to a release request of the service processing unit requesting the shared space.
  • the cache sharing space is not fixed; it is applied for according to requirements.
  • for example, the service processing unit 1 needs to initiate data interaction with the service processing units 3 and 4, defines the size of the required cache space, and agrees on the interaction data format.
  • The process of applying to the main control unit includes the following steps:
  • step s701: a shared cache is requested. Suppose the service processing units 1, 3, and 4 need to perform high-speed data interaction; the service processing unit 1 sends an application message to the main control unit, where the application message includes: the members of the shared cache group, such as the service processing units 1, 3, and 4; the shared cache size; and the interaction data format.
  • step s702: the main control unit receives the application message and queries whether the cache unit has enough space; if yes, the process goes to step s704; if not, to step s703. Step s703: a failure message is returned to the service processing unit 1, and alarm information is sent.
  • Step s704 The shared cache unit allocates a cache base address and a size; and establishes a service processing unit 1, 3, and 4 permission flag table, and initializes to no read/write permission.
  • Step s705 The shared cache unit returns a message to the main control unit, where the return message includes a shared cache base address and a size, and an address of the shared cache group permission flag table.
  • steps s701 to S705 illustrate the process of the service processing unit that initiates the shared cache operation obtaining the corresponding cache space.
  • Step s706 the main control unit sends a message to the service processing unit 3, 4, the message includes: the members in the shared cache group are service processing units 1, 3, 4; the shared cache base address and size; the shared cache group permission flag The address of the table; the interactive data format.
  • step s707: the service processing units 3 and 4 check whether the message is received; if not, the process returns to step s706 and the main control unit is notified to resend the message; if yes, the process proceeds to step s708.
  • Step s708 the main control unit returns a message to the service processing unit 1, the message includes: the shared cache base address and the size; the address of the shared cache group permission flag table.
  • step s709: the service processing unit 1 checks whether the message is received; if not, the process returns to step s708 and the main control unit is notified to resend the message; if yes, the process proceeds to step s710.
  • Step s710 the business processing units 1, 3, 4 start data interaction.
  • step s711: the service processing unit 1 acquires the read/write permission of the allocated cache space.
  • Step s712 the service processing unit 1 writes to the allocated cache space.
  • Step s713 the service processing unit 1 releases the right to read and write.
  • steps s708 to s713 illustrate the process of reading and writing the shared cache unit by the service processing unit that initiates the shared cache in the group.
  • step s714: the shared cache unit notifies the target service processing unit; for example, the data is shared with the intra-group service processing unit 3.
  • the shared cache unit sends a message to the service processing unit 3, notifying it that the cache unit holds data shared with it.
  • the data can also be shared with the service processing units 3 and 4 in the group at the same time, in which case the cache control unit sends a message to both, and the service processing units 3 and 4 each acquire the permission and read the data.
  • Step s715 The service processing unit 3 acquires the read/write permission of the cache space.
  • step s716 the service processing unit 3 reads the data of the cache space.
  • Step s717 the service processing unit 3 releases the read/write permission of the cache space.
  • steps s714 to s717 illustrate the process in which other service processing units in the group share data in the shared cache unit.
  • steps s702, s703, s706, and s708 in FIG. 7 are the processing flow of the main control unit; steps s704, s705, and s714 are the processing flow of the shared cache unit; the remainder is the processing flow of the service processing units.
  • the same service processing unit allows multiple buffer spaces to be applied and data interaction with different service processing units.
  • after the service processing unit 1 successfully applies for the cache space shared with the service processing units 3 and 4, it can also apply for a cache sharing space with the service processing units 2 and 5; even for the same groups (the service processing units 1, 3, 4 and the service processing units 1, 2, 5), multiple cache spaces can be applied for, for the interaction of different types of data.
  • since the number of members of the shared cache group exceeds two, when the service processing unit 1 writes data to the allocated cache space, it must also write the target recipient in the group: the service processing unit 3, the service processing unit 4, or both. After the service processing unit 1 finishes writing the data and releases the read/write permission of the cache space, the cache controller sends a message to the recipient instead of relying on polling, further improving the efficiency of data interaction.
  • after the shared cache space has been used, the principle of "who applies, who releases" is followed. For example, when the service processing unit 1 has applied for the cache sharing space with the service processing units 3 and 4, the service processing unit 1 initiates a release message to the main control unit after use is completed; upon receiving it, the main control unit issues a release command to the other units sharing the cache space, and then the shared cache unit releases the space.
  • the shared cache unit maintains each allocated cache space for a certain period of time, without access, aging the space, and notifying each service processing unit and the master unit that use the cache space.
  • the above shared space also follows the mechanism of mutual exclusion write and simultaneous readout, as shown in FIG. 8, which includes the following steps - step s801, starting the shared cache mutual exclusion mechanism.
  • a read/write flag is set (in the process, 0x55 has no read/write permission, Oxaa has read/write permission, as shown in Table 1, in actual application, the read/write permission setting value You can set it as you like. To read and write, you must first obtain read and write permissions to ensure the consistency of the data in the cache. After reading and writing, you must release the right to read and write. Otherwise, you will make a deadlock and the data cannot be shared.
  • Step s802, the cache is initialized.
  • Step s803, the read/write permission of all shared cache areas defaults to 0x55.
  • Step s804, the service processing unit 1 wishes to write a shared cache area.
  • Step s805, the permission flag of the service processing unit 1 is written as 0xaa.
  • Step s806, it is judged whether any other service processing unit in the group has read/write permission (flag = 0xaa); if so, the process goes to step s808; if not, to step s807.
  • Step s807, the permission flag of the service processing unit 1 is set to 0xaa, and the process proceeds to step s809.
  • Step s808, the permission flag of the service processing unit 1 is set to 0x55, and the process proceeds to step s809.
  • Step s809, the permission flag of the service processing unit 1 is read.
  • Step s810, it is judged whether the flag is 0xaa; if yes, the process goes to step s811; if not, the process goes back to step s805.
  • Step s811, the service processing unit 1 obtains the read/write permission and can read and write the shared cache area.
  • Step s812, after the service processing unit 1 finishes reading and writing, the permission flag is set to 0x55 and the read/write permission is released, so as to avoid deadlock.
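The flag handshake of steps s801 to s812 can be sketched as a small Python simulation. This is illustrative only: the class and member names (`SharedCacheArea`, `acquire`, `release`) are assumptions, while the 0x55/0xaa flag values follow Table 1.

```python
NO_ACCESS = 0x55  # no read/write permission (Table 1)
ACCESS = 0xaa     # read/write permission (Table 1)

class SharedCacheArea:
    """Minimal model of one shared cache area with per-unit permission flags."""

    def __init__(self, members):
        # s802/s803: initialization; every member defaults to no permission
        self.flags = {m: NO_ACCESS for m in members}
        self.data = None

    def acquire(self, unit):
        # s805-s810: grant the flag only if no other member holds it,
        # then read the flag back to confirm the grant
        others_busy = any(f == ACCESS for m, f in self.flags.items() if m != unit)
        self.flags[unit] = NO_ACCESS if others_busy else ACCESS
        return self.flags[unit] == ACCESS

    def release(self, unit):
        # s812: set the flag back to 0x55 after use, avoiding deadlock
        self.flags[unit] = NO_ACCESS

area = SharedCacheArea(members=[1, 3, 4])
assert area.acquire(1)        # s811: unit 1 obtains the read/write permission
assert not area.acquire(3)    # unit 3 must retry while unit 1 holds the flag
area.data = "shared payload"  # unit 1 writes the shared area
area.release(1)
assert area.acquire(3)        # after release, unit 3 can acquire and read
```

A real controller would hold the flag table in the cache itself; the dictionary here only mirrors that behaviour.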
  • Through the description of the above embodiments, those skilled in the art can clearly understand that the present invention can be implemented by means of software plus a necessary general hardware platform, and of course also purely by hardware, but in many cases the former is the better implementation.
  • Based on this understanding, the technical solution of the present invention, in essence or in the part that contributes to the prior art, may be embodied in the form of a software product stored in a storage medium and including a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform the methods of the various embodiments of the present invention.
  • Therefore, the present invention also provides shared cache implementation software, applied to a system including a main control unit and a plurality of service processing units, with the main control unit and the plurality of service processing units connected to a shared cache; the shared cache implementation software performs the following steps:
  • receiving and parsing operation requests for the shared cache; for each operation request to write data into a space in the shared cache, writing data into the space mutually exclusively per request, implementing mutually exclusive cache sharing;
  • for each operation request to read data from a space in the shared cache, reading data from the space simultaneously per request, implementing simultaneous cache sharing.
  • The present invention provides a shared cache system, including a main control unit, a plurality of service processing units, and a shared cache unit connected respectively to the main control unit and the service processing units for implementing cache sharing, as shown in FIG. 4 and FIG. 5.
  • The cache sharing unit specifically includes: a high-speed interface 100, connected respectively to the main control unit and the plurality of service processing units, for receiving the various operation requests sent by the plurality of service processing units and the main control unit to the shared cache unit and forwarding the data transmitted between the service processing units and the shared cache unit; a cache array 300 for providing cache space and storing data at high speed; and a cache controller 200, connected between the high-speed interface 100 and the cache array 300, for implementing cache sharing.
  • The cache controller 200 specifically includes: an operation identification subunit 210, configured to parse operation requests for the shared cache; and a write control subunit 220, configured to queue the write requests in a preset order, to prohibit, while one write request is writing to a space, other requests from writing to or reading from the same space, and to allow, after the current write request finishes, subsequent requests to write to or read from the space.
  • A read control subunit 230 is configured to read the data of a space simultaneously according to the read requests while prohibiting other requests from writing to the same space, and to allow, after a read request finishes, subsequent requests to write to the space.
  • A first aging subunit 240 is connected to the write control subunit 220 for performing aging refresh on space write requests.
  • A cache self-test subunit 250 is configured to initialize the cache array 300 and report status information to the main control unit and each service processing unit, including the total cache space, the available space, the unavailable space, and the corresponding start and end addresses.
  • An address mapping subunit 260 is configured to map addresses between the high-speed interface 100 and the cache array 300 and allocate cache space according to the space allocation requests received by the operation identification subunit 210. An address release subunit 270 is configured to release cache space according to the space release requests received by the operation identification subunit 210; a space release request is delivered by the main control unit only after all the service processing units related to the space have requested the release. An extension subunit 280 is connected to the address mapping subunit 260 for extending the addressing space of the cache addresses in the cache array.
  • The cache controller 200 further includes a shared space allocation subunit 291, connected to the address mapping subunit 260 and configured to allocate a shared space for the service processing units in a group according to the shared space allocation requests received by the operation identification subunit 210.
  • An operation permission setting subunit 292 is connected to the shared space allocation subunit 291 and configured to grant the service processing units the permissions to operate on the shared space, the permissions including read permission and write permission, and to reclaim, after a service processing unit finishes operating on the shared space, the permission granted to that unit. A notification subunit 293 is connected to the operation permission setting subunit 292 and configured to obtain the address of the target receiver in the group and, after the write operation ends, notify the target receiver to read the cache space. In addition, to avoid deadlock, a second aging subunit 294 is connected to the shared space allocation subunit 291 for periodically refreshing the shared space.
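The behaviour attributed to the write control subunit 220 and the read control subunit 230 (queued, mutually exclusive writes; concurrent reads that only exclude writers) matches the classic readers-writer discipline. The sketch below is a hedged software analogue, not the patented hardware logic; the class name `CacheSpaceLock` is an assumption.

```python
import threading

class CacheSpaceLock:
    """Writers are serialized and exclusive; any number of readers may
    read the same space at once, but never while a write is in progress."""

    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0
        self._writing = False

    def acquire_write(self):
        with self._cond:
            # a write waits until no writer and no reader is active
            while self._writing or self._readers:
                self._cond.wait()
            self._writing = True

    def release_write(self):
        with self._cond:
            self._writing = False
            self._cond.notify_all()   # wake queued writers and readers

    def acquire_read(self):
        with self._cond:
            # reads proceed concurrently, blocked only by an active write
            while self._writing:
                self._cond.wait()
            self._readers += 1

    def release_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()

lock = CacheSpaceLock()
lock.acquire_read()
lock.acquire_read()   # two readers share the space simultaneously
lock.release_read()
lock.release_read()
lock.acquire_write()  # a writer now has exclusive access
lock.release_write()
```

In the hardware described here the same effect is achieved with flag bits rather than condition variables.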


Description

Shared cache system, implementation method and implementation software

TECHNICAL FIELD

The present invention relates to the field of communications technologies, and in particular to a shared cache system, its implementation method and its implementation software.

BACKGROUND

Data systems in the prior art are generally divided into centralized systems and distributed systems.

In the centralized system shown in FIG. 1, the main control unit and the service processing units each have their own memory unit for storing their respective data, and each service processing unit has an interface connected to its own downstream device. The service processing units communicate with the main control unit through the control channel of the switching network, and communicate with each other through the service channel of the switching network.

In the distributed system shown in FIG. 2, each service processing unit is connected to the interfaces through the service channel of the switching network, and the service processing units, the interfaces and the main control unit are connected through the control channel of the switching network. Each service processing unit includes a control engine, a memory unit and a flow acceleration engine.

It can be seen that at present, whether in a centralized or a distributed system, the memory units are distributed inside the individual service processing units and are exclusively owned by the corresponding service processing unit; they cannot provide storage services to other service processing units. This has the following drawbacks: data cannot be shared directly between service processing units; to share data between service processing units, the data must be forwarded through the main control unit; and since this is not direct data sharing, data-transmission reliability problems inevitably arise, so every transmission must be acknowledged and retransmitted on failure, which causes considerable system delay, creating a performance bottleneck or making some data services that require high speed and low latency impossible to implement.

SUMMARY OF THE INVENTION
The present invention provides a shared cache system and an implementation method, to overcome the drawback in the prior art that data cannot be shared directly between service processing units.

The present invention provides a shared cache implementation method, applied to a system including a main control unit and a plurality of service processing units. The method includes setting a shared cache for the main control unit and the plurality of service processing units, and performing the following steps:

receiving and parsing operation requests for the shared cache;

for each operation request to write data into a space in the shared cache, writing data into the space mutually exclusively per request, implementing mutually exclusive cache sharing;

for each operation request to read data from a space in the shared cache, reading data from the space simultaneously for the requests, implementing simultaneous cache sharing.

The step of writing data mutually exclusively per request to implement mutually exclusive sharing of the cache space specifically includes: queuing the write requests in a preset order; while one write request is writing to the space, prohibiting other requests from writing to or reading from the same space; and after the current write request finishes, allowing subsequent requests to write to or read from the space.

Prohibiting other requests from writing to or reading from the same space includes: marking the space by setting a write flag bit, and releasing or changing the write flag bit after the write finishes, so as to allow subsequent requests to write to or read from the space.

The step of writing data mutually exclusively per request into a space in the shared cache further includes: after a write request for the space is received, if no write data is received within a preset time, returning a write failure message and proceeding with the writing or reading of other requests.
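The write-timeout rule above (fail the write if no data arrives within the preset time) can be illustrated with a small Python sketch; the function name `handle_write_request` and the timeout value are illustrative assumptions, not part of the specification.

```python
import time

WRITE_TIMEOUT = 0.05  # the "preset time", seconds (illustrative value)

def handle_write_request(receive_data):
    """Start a timer on a write request and fail the write if the data
    does not arrive before the timer expires, so other requests can run."""
    deadline = time.monotonic() + WRITE_TIMEOUT
    while time.monotonic() < deadline:
        data = receive_data()
        if data is not None:
            return ("write-success", data)
        time.sleep(0.001)  # poll for the pending write data
    # timer expired: report failure and free the space for other requests
    return ("write-failure", None)

# the data never arrives, so the cache reports a write failure
status, _ = handle_write_request(lambda: None)
print(status)  # prints "write-failure"
```

A hardware implementation would use the cache controller's timer rather than a polling loop, but the decision logic is the same.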
The step of reading data simultaneously for the requests to implement simultaneous sharing of the cache space specifically includes: reading the data of the space according to the read requests simultaneously while prohibiting other requests from writing to the same space; and after a read request finishes, allowing subsequent requests to write to the space.

Prohibiting other requests from writing to the same space includes: marking the space by setting a read flag bit, and releasing or changing the read flag bit after the read finishes, so as to allow subsequent requests to write to the space.

After the shared cache is set for the main control unit and the plurality of service processing units, the method further includes a shared cache initialization step, which specifically includes: the shared cache performs a self-test and, after the self-test is completed, reports status information to the main control unit and each service processing unit, including the total cache space, the available space, the unavailable space, and the corresponding start and end addresses.

The received and parsed operation requests for the shared cache include space allocation requests, space operation requests, space release requests and shared space allocation requests.

After a space allocation request is received and parsed, the method includes: allocating space according to the request and initializing the space. The space allocation request is delivered to the shared cache through the following steps: each service unit identifies whether a given service is related to an existing space; if so, it delivers a space operation request to the shared cache, otherwise it delivers a space allocation request to the shared cache.

The received and parsed space operation requests include the space read operation requests and space write operation requests of the service processing units.

After a space release request is received and parsed, the method includes: releasing the space according to the request. The space release request is delivered to the shared cache through the following steps: each service processing unit reports its space release request to the main control unit; the main control unit identifies whether all the service processing units related to the space have issued space release requests; if so, it delivers a space release request to the shared cache, otherwise it keeps monitoring the space release requests of the service processing units.

After a shared space allocation request is received and parsed, the method includes: allocating a shared space, and granting the service processing units in the group the permissions to operate on the shared space, the permissions including read permission and write permission; the service processing unit requesting the shared space acquires the read/write permission, writes to the shared space and at the same time writes the address of the target receiver in the group; after the write completes, the read/write permission of the service processing unit is released and the target receiver is notified to read the shared space.

After the shared space is allocated, the method further includes: releasing the shared space if it has not been accessed within a predetermined time.

After the shared space is allocated, the method further includes: releasing the shared space according to a release request from the service processing unit that requested the shared space.
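The aging behaviour just described (a shared space that is not accessed within the predetermined time is reclaimed) can be sketched as follows. The class name `AgingCache` and the aging period are assumptions for illustration only.

```python
import time

AGING_PERIOD = 0.05  # "predetermined time" before an idle space is reclaimed (illustrative)

class AgingCache:
    """Every allocated space records its last access time; a periodic
    sweep reclaims the spaces that have been idle for too long."""

    def __init__(self):
        self.spaces = {}  # space id -> last access timestamp

    def allocate(self, space_id):
        self.spaces[space_id] = time.monotonic()

    def touch(self, space_id):
        # any read or write refreshes the space's last-access time
        self.spaces[space_id] = time.monotonic()

    def sweep(self):
        now = time.monotonic()
        aged = [s for s, t in self.spaces.items() if now - t > AGING_PERIOD]
        for s in aged:
            del self.spaces[s]  # here the unit would also notify the users
        return aged

cache = AgingCache()
cache.allocate("conn-1")
cache.allocate("conn-2")
time.sleep(AGING_PERIOD * 1.5)
cache.touch("conn-2")               # conn-2 is still in use
assert cache.sweep() == ["conn-1"]  # only the idle space is reclaimed
```

In the described system the sweep is driven by the cache controller's timers, and reclaiming a space also notifies the main control unit and every unit that used it.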
The present invention also provides shared cache implementation software, applied to a system including a main control unit and a plurality of service processing units, with the main control unit and the plurality of service processing units connected to a shared cache; the shared cache implementation software performs the following steps:

receiving and parsing operation requests for the shared cache;

for each operation request to write data into a space in the shared cache, writing data into the space mutually exclusively per request, implementing mutually exclusive cache sharing;

for each operation request to read data from a space in the shared cache, reading data from the space simultaneously for the requests, implementing simultaneous cache sharing.

The present invention also provides a shared cache system, including a main control unit and a plurality of service processing units, and further including a shared cache unit, connected respectively to the main control unit and the service processing units, for implementing cache sharing. The shared cache unit specifically includes:

a high-speed interface, connected respectively to the main control unit and the plurality of service processing units, for receiving the various operation requests for the shared cache unit and forwarding the data transmitted between the service processing units and the shared cache unit;

a cache array, for providing cache space and storing data at high speed;

a cache controller, connected between the high-speed interface and the cache array, for performing mutually exclusive writes and simultaneous reads of data on the cache array according to the various operation requests, implementing cache sharing.

The cache controller specifically includes: an operation identification subunit, for parsing operation requests for the shared cache; a write control subunit, for queuing the write requests in a preset order, prohibiting, while one write request is writing to a space, other requests from writing to or reading from the same space, and allowing, after the current write request finishes, subsequent requests to write to or read from the space; and a read control subunit, for reading the data of a space simultaneously according to the read requests while prohibiting other requests from writing to the same space, and allowing, after a read request finishes, subsequent requests to write to the space.

The cache controller further includes: a first aging subunit, connected to the write control subunit, for performing aging refresh on space write requests.

The cache controller further includes: a cache self-test subunit, for initializing the cache array and reporting status information to the main control unit and each service processing unit, including the total cache space, the available space, the unavailable space and the corresponding start and end addresses. The cache controller further includes: an address mapping subunit, for mapping addresses between the high-speed interface and the cache array and allocating cache space according to the space allocation requests received by the operation identification subunit; and an address release subunit, for releasing cache space according to the space release requests received by the operation identification subunit, where a space release request is delivered to the shared cache by the main control unit only after all the service processing units related to the space have requested the release.

The cache controller further includes: an extension subunit, connected to the address mapping subunit, for extending the addressing space of the cache addresses in the cache array.

The cache controller further includes: a shared space allocation subunit, connected to the address mapping subunit, for allocating a shared space for the service processing units in a group according to the shared space allocation requests received by the operation identification subunit; an operation permission setting subunit, connected to the shared space allocation subunit, for granting the service processing units the permissions to operate on the shared space, the permissions including read permission and write permission, and for reclaiming, after a service processing unit finishes operating on the shared space, the permission granted to that unit; and a notification subunit, connected to the operation permission setting subunit, for obtaining the address of the target receiver in the group and, after the write operation ends, notifying the target receiver to read the cache space.

The cache controller further includes: a second aging subunit, connected to the shared space allocation subunit, for periodically refreshing the shared space.

The shared cache system is a distributed system or a centralized system.
Compared with the prior art, the embodiments of the present invention have the following advantages: by setting a shared cache for the main control unit and the service processing units and providing a mutual exclusion function in the cache, the consistency of the data processed by the service processing units is guaranteed; furthermore, the high-speed data sharing problem can be solved through the allocation of shared cache space, which greatly improves the overall performance of the system.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a schematic diagram of the exclusive memory distribution structure in a centralized system in the prior art;

FIG. 2 is a schematic diagram of the exclusive memory distribution structure in a distributed system in the prior art;

FIG. 3 is a structural diagram of a centralized system using a shared cache unit according to the present invention;

FIG. 4 is a structural diagram of a distributed system using a shared cache unit according to the present invention;

FIG. 5 is a flowchart of the shared cache system of the present invention applied to attack statistics;

FIG. 6 is an initialization flowchart of the shared cache system of the present invention;

FIG. 7 is a flowchart of the shared cache system of the present invention applied to sharing data among service processing units;

FIG. 8 is a flowchart of the implementation of the internal mutual exclusion mechanism of the shared cache unit of the present invention;

FIG. 9 is a structural diagram of the shared cache unit of the present invention.

DETAILED DESCRIPTION

Specific embodiments of the present invention will be described in detail below. It should be noted that the embodiments described here are for illustration only and are not intended to limit the present invention.
A centralized system using a shared cache unit according to an embodiment of the present invention is shown in FIG. 3, and a distributed system using a shared cache unit is shown in FIG. 4. The shared cache unit specifically includes a high-speed interface, a cache controller and a cache array. The high-speed interface must be based on a reliable connection, such as PCIE (Peripheral Component Interconnect Express), HT (HyperTransport) or Rapid IO, guaranteeing at the bottom layer the reliability of data transfer between the shared cache unit and the service processing units. The cache controller is the core module of the shared cache unit and serves as the channel between the high-speed interface and the cache array. Its main functions include: implementing the address mapping between the high-speed interface and the cache array and extending the addressing space, so that the cache capacity can be extended at will; implementing the mutual exclusion function when the service processing units access the same cache address, guaranteeing the consistency of the cached data; and providing timers that, through per-timer configuration, provide an automatic cache aging function. The cache array stores the data of the control unit and the service processing units.

The shared cache system of the present invention can implement various functions flexibly. As a specific embodiment of the shared cache implementation method, a shared cache is first set for the main control unit and the plurality of service processing units, which can provide a unified storage space for the service processing units, thereby guaranteeing the consistency of the data of the services processed by different service processing units. The implementation flow of this method is illustrated below with an example:

When link-establishment statistics for related new connections are collected, all or some of the service processing units need to write the same shared cache space, so a mutual exclusion mechanism needs to be implemented in the cache controller.

The shared cache unit receives and parses the operation requests of the service processing units and the main control unit for the shared cache. For each operation request to write data into a space in the shared cache, data is written into the space mutually exclusively per request, implementing mutually exclusive cache sharing; and for each operation request to read data from a space in the shared cache, data is read from the space simultaneously for the requests, implementing simultaneous cache sharing.

The step of writing data mutually exclusively per request to implement mutually exclusive sharing of the cache space specifically includes: queuing the write requests in a preset order; while one write request is writing to the space, prohibiting other requests from writing to or reading from the same space; and after the current write request finishes, allowing subsequent requests to write to or read from the space.

The step of prohibiting other requests from writing to or reading from the same space includes: marking the space by setting a write flag bit, and releasing or changing the write flag bit after the write finishes, so as to allow subsequent requests to write to or read from the space.

In particular, the step of writing data mutually exclusively per request into a space in the shared cache further includes: after a write request for the space is received, if no write data is received within a preset time, returning a write failure message and proceeding with the writing or reading of other requests.

The step of reading data simultaneously for the requests to implement simultaneous sharing of the cache space specifically includes: reading the data of the space according to the read requests simultaneously while prohibiting other requests from writing to the same space; and after a read request finishes, allowing subsequent requests to write to the space.

The step of prohibiting other requests from writing to the same space includes: marking the space by setting a read flag bit, during which writes by other requests are not allowed but reads by other requests are allowed; and releasing or changing the read flag bit after the read finishes, so as to allow subsequent requests to write to the space.

It can be seen that, by identifying the space operation requests and performing mutually exclusive writes and simultaneous reads on a space, the present invention can guarantee the consistency of the data operated on by the service processing units.

In addition, the received and parsed operation requests for the shared cache also include space allocation requests and space release requests. Space allocation is performed after a space allocation request is received and parsed, to guarantee further writes and reads; this specifically includes: allocating space according to the request and initializing the space. The space allocation request is delivered to the shared cache through the following steps: each service unit identifies whether a given service is related to an existing space; if so, it delivers a space operation request to the shared cache for reading and writing, otherwise it delivers a space allocation request to the shared cache.

Space release is performed after a space release request is received and parsed, so as to guarantee space allocation for subsequent requests. This specifically includes: releasing the space according to the request. The space release request is delivered to the shared cache through the following steps: each service processing unit reports its space release request to the main control unit; the main control unit identifies whether all the service processing units related to the space have issued space release requests; if so, it delivers a space release request to the shared cache, otherwise it keeps monitoring the space release requests of the service processing units.
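The release rule above (the main control unit forwards the release to the shared cache only after every service processing unit related to the space has asked for it) can be sketched in Python. The class and attribute names are illustrative assumptions.

```python
class MainControlUnit:
    """Track which related units still hold a space; only when the set of
    pending units is empty is the release issued to the shared cache."""

    def __init__(self, related_units):
        # space id -> set of service units that must request the release
        self.pending = {space: set(units) for space, units in related_units.items()}
        self.released = []  # spaces for which a release command was issued

    def report_release(self, unit, space):
        self.pending[space].discard(unit)
        if not self.pending[space]:       # all related units have reported
            self.released.append(space)   # issue release to the shared cache

mcu = MainControlUnit({"space-A": {1, 3, 4}})
mcu.report_release(1, "space-A")
mcu.report_release(3, "space-A")
assert mcu.released == []         # unit 4 has not asked yet: keep monitoring
mcu.report_release(4, "space-A")
assert mcu.released == ["space-A"]
```

This mirrors steps s512 to s514 of the attack-statistics example that follows, where the main control unit waits for every unit before sending the release command.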
An application example of the above shared space implementation method is shown in FIG. 5; it is an application example of the present invention based on attack statistics and includes the following steps:

Step s501, the service processing unit starts flow-based statistics.

Step s502, a flow enters the service processing unit from the interface unit.

Step s503, the service processing unit judges whether the flow hits the session table, i.e., it determines a hit by comparing certain identifiers in the flow with the parameters in the pre-stored session table. If it hits, the flow is normal traffic and the process goes to step s511; if it does not hit, the flow may be attack traffic and the process goes to step s504 to determine further whether it is attack traffic.

Step s504, the service processing unit establishes a new connection and judges whether the new connection establishment has finished. If it has finished, the traffic is normal and the process goes to step s512; if it has not finished, the flow is proven to be an attack flow and the process goes to step s505.

The above steps s501 to s505 determine whether a flow is an attack flow. Once a flow is determined to be an attack flow, steps s505 to s511 are performed to collect statistics on the parameters of the attack flow and store them in the shared cache unit.

Step s505, the service processing unit queries whether the cache has already allocated a space related to the connection; if so, the process goes to step s510; if not, to step s506.

Step s506, the service processing unit applies for a cache space for the connection.

Step s507, the service processing unit judges whether there is enough cache space; if not, the process goes to step s528; if so, to step s508.

Step s508, the service processing unit allocates a cache space for the connection; the space includes the start address and the address length of the cache.

Step s509, the service processing unit initializes the cache space, i.e., clears the cache space to zero.

Step s510, the service processing unit writes the counts of the various statistics into the allocated shared cache space, and the process goes to step s518.

Step s511, the connection is already established, and the service processing unit performs the session operation.

Step s512, the service processing unit reports to the main control unit, and the process goes to step s513.

Step s513, the main control unit detects whether all the service processing units have finished establishing the new connections related to the connection; if not, it continues detecting; if so, the process goes to step s514.

Step s514, the main control unit sends a release command to the shared cache unit.

Step s515, the shared cache unit receives the release command and the address to be released.

Step s516, the shared cache unit releases the cache at the corresponding address, which becomes available for reallocation.

Step s517, the shared cache unit returns a release success message to the main control unit.

The function of the above steps s512 to s517 is to release the corresponding shared cache for storing other data only after it is determined that there is no attack traffic.

Step s518, the shared cache unit receives the write command, the data to be written and the address from the service processing unit. Step s519, the shared cache unit starts the write operation timer.

Step s520, the shared cache unit judges whether the address flag allows writing; if so, the process goes to step s522; if not, to step s521.

Step s521, the shared cache unit judges whether the timer has expired; if not, the process goes back to step s520; if it has expired, the process goes to step s527.

Step s522, the shared cache unit sets the address flag to not-writable.

Step s523, the shared cache unit releases the timer.

Step s524, the shared cache unit reads the existing data at the address, adds it to the data to be written, and writes the sum back into the address space.

Step s525, the shared cache unit sets the address flag to writable.

Step s526, the shared cache unit returns a write success message to the service processing unit.

Step s527, the shared cache unit releases the timer.

Step s528, the shared cache unit returns a write failure message to the service processing unit.

The above steps s518 to s528 describe the process of writing the statistical data into the shared cache unit, and in particular illustrate how data operation consistency is maintained by using the mutual exclusion mechanism of the present invention.
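The accumulate-and-write-back behaviour of steps s522 to s525 (mark the address not-writable, add the pending value to the stored value, write the sum back, mark the address writable again) can be sketched as a minimal model. The class name `StatCounter` is an assumption, and the real unit uses a hardware flag rather than a Python attribute.

```python
class StatCounter:
    """Model of one statistics address in the shared cache."""

    def __init__(self):
        self.value = 0
        self.writable = True

    def add(self, delta):
        if not self.writable:
            return False                 # another unit holds the address (s520/s521)
        self.writable = False            # s522: mark the address not-writable
        self.value = self.value + delta  # s524: read, add, write the sum back
        self.writable = True             # s525: mark the address writable again
        return True                      # s526: report write success

ctr = StatCounter()
assert ctr.add(2) and ctr.add(3)  # two units accumulate into the same address
assert ctr.value == 5             # counts add up instead of overwriting
```

The point of the read-add-write-back sequence is that concurrent units accumulate into the same counter rather than overwrite each other, which is why the not-writable flag must cover the whole sequence.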
In this embodiment, steps s501 to s512 are the processing flow in the service processing unit, steps s513 to s514 are the processing flow in the main control unit, and steps s515 to s528 are the processing flow in the shared cache unit. When a connection is newly established, a shared cache space is first allocated for the connection; the main control unit and all the service processing units can apply independently, but once link establishment is completed, the release of the cache space corresponding to the connection can only be performed by the main control unit.

In the present invention, a writable flag bit is provided for every allocated cache space. When the flag bit is busy, another unit is operating on the cache space and the requester must wait, which guarantees data consistency. Read operations, however, do not need mutual exclusion: multiple units can read at the same time, which guarantees the data read rate and speeds up the real-time performance of data processing. Before the statistics are collected, the system also needs to be initialized, as shown in FIG. 6, which includes the following steps: s601, the system starts up and performs initialization.

s602, the shared cache unit performs a self-test.

s603, the shared cache unit reports status information to the main control unit and each service processing unit. The status information includes: the total cache capacity with its start and end addresses; the available cache capacity with its start and end addresses; and the unavailable cache capacity with its start and end addresses. The initialization ends after the status information is reported.

Through the above embodiments, the shared cache implementation method provided by the present invention guarantees the consistency of the operated data by setting a shared cache. Furthermore, the shared cache implementation method provided by the present invention can also guarantee high-speed exchange of shared data among the service processing units, implementing the sharing of high-speed data.
Specifically, the shared cache unit performs the following after receiving and parsing a shared space allocation request:

allocating a shared space, and granting the service processing units in the group the permissions to operate on the shared space, the permissions including read permission and write permission;

the service processing unit requesting the shared space acquires the read/write permission, writes to the shared space, and at the same time writes the address of the target receiver in the group;

after the write completes, the read/write permission of the service processing unit is released, and the target receiver is notified to read the shared space.

To avoid deadlock, after the shared space is allocated the method further includes: releasing the shared space if it has not been accessed within a predetermined time.

And, to guarantee space utilization, after the shared space is allocated the method further includes: releasing the shared space according to a release request from the service processing unit that requested it.
Please refer to FIG. 7, which shows the application implementation flow of the above high-speed data sharing. The use of the cache sharing unit is not fixed but is applied for as needed. For example, if the service processing unit 1 needs to initiate data access with the service processing units 3 and 4, it defines the size of the required cache space and the format of the data to be exchanged, and applies to the main control unit. The implementation process includes the following steps:

Step s701, shared cache application. Assuming that the service processing units 1, 3 and 4 need to perform high-speed data interaction, the service processing unit sends an application message to the main control unit, which includes: the members of the shared cache group, e.g., the service processing units 1, 3 and 4; the size of the shared cache; and the format of the data to be exchanged.

Step s702, the main control unit receives the application message and queries whether the cache unit has enough space; if so, the process goes to step s704; if not, to step s703. Step s703, a failure message is returned to the service processing unit 1 and an alarm is sent.

Step s704, the shared cache unit allocates the cache base address and size, establishes the permission flag table for the service processing units 1, 3 and 4, and initializes it to no read/write permission.

Step s705, the shared cache unit returns a message to the main control unit, which includes the shared cache base address and size and the address of the permission flag table of the shared cache group.

The above steps s701 to s705 describe the process in which the service processing unit that initiates the shared cache operation obtains the corresponding cache space.

Step s706, the main control unit sends a message to the service processing units 3 and 4, which includes: the members of the shared cache group, namely the service processing units 1, 3 and 4; the shared cache base address and size; the address of the permission flag table of the shared cache group; and the format of the data to be exchanged.

Step s707, it is checked whether the service processing units 3 and 4 have received the message; if not, the process goes back to step s706 and the main control unit resends the message; if so, the process goes to step s708.

The above steps s706 to s707 describe the process in which the other service processing units in the group obtain the corresponding cache space. Step s708, the main control unit returns a message to the service processing unit 1, which includes: the shared cache base address and size; and the address of the permission flag table of the shared cache group.

Step s709, it is checked whether the service processing unit 1 has received the message; if not, the process goes back to step s708 and the main control unit resends the message; if so, the process goes to step s710.

Step s710, the service processing units 1, 3 and 4 start the data interaction.

Step s711, the service processing unit 1 acquires the read/write permission of the allocated cache space.

Step s712, the service processing unit 1 writes into the allocated cache space.

Step s713, the service processing unit 1 releases the read/write permission.

The above steps s708 to s713 describe the process in which the service processing unit that initiated the shared cache performs read and write operations on the shared cache unit.

Step s714, the shared cache unit notifies the target service processing unit. For example, if the data is shared with the service processing unit 3 in the group, the shared cache unit sends a message to the service processing unit 3, notifying it that the cache space contains data shared with it by the service processing unit 1. The data can also be shared with the service processing units 3 and 4 at the same time, in which case the cache control unit sends messages to the service processing units 3 and 4 simultaneously, and each of them acquires the permission and then reads the data.

Step s715, the service processing unit 3 acquires the read/write permission of the cache space.

Step s716, the service processing unit 3 reads the data in the cache space.

Step s717, the service processing unit 3 releases the read/write permission of the cache space.

The above steps s714 to s717 describe the process in which the other service processing units in the group share the data in the shared cache unit.

In FIG. 7, steps s702, s703, s706 and s708 are the main control unit processing flow; steps s704, s705 and s814 are the shared cache unit processing flow; the remaining steps are the service processing unit processing flow.

In this scheme, the same service processing unit is allowed to apply for multiple cache spaces and to exchange data with different service processing units. For example, after successfully applying for the cache space with the service processing units 3 and 4, a service processing unit can also apply for a cache sharing space with the service processing units 2 and 5; even within the same groups (service processing units 1, 3, 4 and service processing units 1, 2, 5), multiple cache spaces can be applied for, for the interaction of different types of data.

Since the shared cache has more than two members, when the service processing unit 1 writes data into the allocated cache space it must write the target receiver in the group, i.e., whether it is the service processing unit 3, the service processing unit 4, or both units 3 and 4 at the same time. After the service processing unit 1 finishes writing the data and releases the read/write permission of the cache space, the cache controller needs to send a message to the receiver instead of using polling, which further improves the efficiency of the data interaction.

When a shared cache space is no longer needed, the principle is that whoever applied for it releases it. For example, if the service processing unit applied for the cache sharing space with the service processing units 3 and 4, then after use is completed the service processing unit 1 initiates a release message to the main control unit. After receiving it, the main control unit issues a release command to the other units sharing the cache space and at the same time requires the shared cache unit to release the space. The shared cache unit itself maintains every allocated cache space; if a space is not accessed for more than a certain period of time, the unit ages and reclaims the space, and at the same time notifies the main control unit and each service processing unit that uses the cache space.
Of course, the above shared space likewise follows the mechanism of mutually exclusive writing and simultaneous reading, as shown in FIG. 8, which includes the following steps. Step s801, the shared cache mutual exclusion mechanism is started. A read/write flag is set for every service processing unit that shares the cache space (in the flow, 0x55 means no read/write permission and 0xaa means read/write permission, as shown in Table 1; in an actual application, the permission values can be set as desired). To perform read/write operations, the read/write permission must first be obtained, so as to guarantee the consistency of the data in the cache. After reading and writing, the permission must be released; otherwise a deadlock occurs and the data cannot be shared.
Table 1:

  Flag value | Meaning
  0x55       | no read/write permission
  0xaa       | read/write permission
Step s802, the cache is initialized.

Step s803, the read/write permission of all shared cache areas defaults to 0x55.

Step s804, the service processing unit 1 wishes to write a shared cache area.

Step s805, the permission flag of the service processing unit 1 is written as 0xaa.

Step s806, the shared cache unit judges whether any other service processing unit in the group has read/write permission (flag = 0xaa); if so, the process goes to step s808; if not, to step s807.

Step s807, the permission flag of the service processing unit 1 is set to 0xaa, and the process proceeds to step s809. Step s808, the permission flag of the service processing unit 1 is set to 0x55, and the process proceeds to step s809.

Step s809, the permission flag of the service processing unit 1 is read.

Step s810, it is judged whether the flag is 0xaa; if yes, the process goes to step s811; if not, back to step s805. Step s811, the service processing unit 1 obtains the read/write permission and can read and write the shared cache area.

Step s812, after the service processing unit 1 finishes reading and writing, the permission flag is set to 0x55 and the read/write permission is released, so as to avoid deadlock.
Through the description of the above embodiments, those skilled in the art can clearly understand that the present invention can be implemented by means of software plus a necessary general hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part that contributes to the prior art, may be embodied in the form of a software product stored in a storage medium and including a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform the methods of the various embodiments of the present invention.

Therefore, the present invention also provides shared cache implementation software, applied to a system including a main control unit and a plurality of service processing units, with the main control unit and the plurality of service processing units connected to a shared cache; the shared cache implementation software performs the following steps:

receiving and parsing operation requests for the shared cache;

for each operation request to write data into a space in the shared cache, writing data into the space mutually exclusively per request, implementing mutually exclusive cache sharing;

for each operation request to read data from a space in the shared cache, reading data from the space simultaneously for the requests, implementing simultaneous cache sharing.

The present invention provides a shared cache system, including a main control unit and a plurality of service processing units, and further including a shared cache unit, connected respectively to the main control unit and the service processing units, for implementing cache sharing, as shown in FIG. 4 and FIG. 5.

As shown in FIG. 9, the cache sharing unit specifically includes: a high-speed interface 100, connected respectively to the main control unit and the plurality of service processing units, for receiving the various operation requests sent by the plurality of service processing units and the main control unit to the shared cache unit and forwarding the data transmitted between the service processing units and the shared cache unit; a cache array 300 for providing cache space and storing high-speed data; and a cache controller 200, connected between the high-speed interface 100 and the cache array 300, for implementing cache sharing.

The cache controller 200 specifically includes: an operation identification subunit 210 for parsing operation requests for the shared cache; a write control subunit 220 for queuing the write requests in a preset order, prohibiting, while one write request is writing to a space, other requests from writing to or reading from the same space, and allowing, after the current write request finishes, subsequent requests to write to or read from the space; a read control subunit 230 for reading the data of a space simultaneously according to the read requests while prohibiting other requests from writing to the same space, and allowing, after a read request finishes, subsequent requests to write to the space; a first aging subunit 240, connected to the write control subunit 220, for performing aging refresh on space write requests; a cache self-test subunit 250 for initializing the cache array 300 and reporting status information to the main control unit and each service processing unit, including the total cache space, the available space, the unavailable space and the corresponding start and end addresses; an address mapping subunit 260 for mapping addresses between the high-speed interface 100 and the cache array 300 and allocating cache space according to the space allocation requests received by the operation identification subunit 210; an address release subunit 270 for releasing cache space according to the space release requests received by the operation identification subunit 210, where a space release request is delivered to the shared cache by the main control unit only after all the service processing units related to the space have requested the release; and an extension subunit 280, connected to the address mapping subunit 260, for extending the addressing space of the cache addresses in the cache array 300.

Through the above apparatus, the requirement of cache sharing can be met in an extensible way while the consistency of the cached data is guaranteed. Furthermore, the cache controller 200 further includes: a shared space allocation subunit 291, connected to the address mapping subunit 260, for allocating a shared space for the service processing units in a group according to the shared space allocation requests received by the operation identification subunit 210; an operation permission setting subunit 292, connected to the shared space allocation subunit 291, for granting the service processing units the permissions to operate on the shared space, the permissions including read permission and write permission, and for reclaiming, after a service processing unit finishes operating on the shared space, the permission granted to that unit; and a notification subunit 293, connected to the operation permission setting subunit 292, for obtaining the address of the target receiver in the group and, after the write operation ends, notifying the target receiver to read the cache space. In addition, to avoid deadlock, a second aging subunit 294 is connected to the shared space allocation subunit 291 for periodically refreshing the shared space.

Although the present invention has been described with reference to several exemplary embodiments, it should be understood that the terms used are illustrative and exemplary rather than restrictive. Since the present invention can be embodied in various forms without departing from the spirit or essence of the invention, it should be understood that the above embodiments are not limited to any of the foregoing details but should be construed broadly within the spirit and scope defined by the appended claims; therefore, all changes and modifications falling within the claims or their equivalents should be covered by the appended claims.

Claims

Claims

1. A shared cache implementation method, applied to a system including a main control unit and a plurality of service processing units, characterized in that the method includes setting a shared cache for the main control unit and the plurality of service processing units, and performing the following steps:

receiving and parsing operation requests for the shared cache;

for each operation request to write data into a space in the shared cache, writing data into the space mutually exclusively per request, implementing mutually exclusive cache sharing;

for each operation request to read data from a space in the shared cache, reading data from the space simultaneously for the requests, implementing simultaneous cache sharing.

2. The shared cache implementation method according to claim 1, characterized in that the step of writing data mutually exclusively per request to implement mutually exclusive sharing of the cache space specifically includes:

queuing the write requests in a preset order;

while one write request is writing to the space, prohibiting other requests from writing to or reading from the same space; and after the current write request finishes, allowing subsequent requests to write to or read from the space.

3. The shared cache implementation method according to claim 2, characterized in that prohibiting other requests from writing to or reading from the same space includes: marking the space by setting a write flag bit, and releasing or changing the write flag bit after the write finishes, so as to allow subsequent requests to write to or read from the space.

4. The shared cache implementation method according to claim 2, characterized in that the step of writing data mutually exclusively per request into a space in the shared cache further includes: after a write request for the space is received, if no write data is received within a preset time, returning a write failure message and proceeding with the writing or reading of other requests.

5. The shared cache implementation method according to claim 1, characterized in that the step of reading data simultaneously for the requests to implement simultaneous sharing of the cache space specifically includes: reading the data of the space according to the read requests simultaneously while prohibiting other requests from writing to the same space;

and after a read request finishes, allowing subsequent requests to write to the space.

6. The shared cache implementation method according to claim 5, characterized in that prohibiting other requests from writing to the same space includes: marking the space by setting a read flag bit, and releasing or changing the read flag bit after the read finishes, so as to allow subsequent requests to write to the space.
7. The shared cache implementation method according to any one of claims 1 to 6, characterized in that, after the shared cache is set for the main control unit and the plurality of service processing units, the method further includes a shared cache initialization step, which specifically includes:

the shared cache performs a self-test and, after the self-test is completed, reports status information to the main control unit and each service processing unit, including the total cache space, the available space, the unavailable space, and the corresponding start and end addresses.

8. The shared cache implementation method according to any one of claims 1 to 6, characterized in that the received and parsed operation requests for the shared cache include space allocation requests, space operation requests, space release requests and shared space allocation requests.

9. The shared cache implementation method according to claim 8, characterized in that, after a space allocation request is received and parsed, the method includes: allocating space according to the request and initializing the space; wherein the space allocation request is delivered to the shared cache through the following steps: each service unit identifies whether a given service is related to an existing space; if so, it delivers a space operation request to the shared cache, otherwise it delivers a space allocation request to the shared cache.

10. The shared cache implementation method according to claim 8, characterized in that the received and parsed space operation requests include the space read operation requests and space write operation requests of the service processing units.

11. The shared cache implementation method according to claim 8, characterized in that, after a space release request is received and parsed, the method includes: releasing the space according to the request; wherein the space release request is delivered to the shared cache through the following steps: each service processing unit reports its space release request to the main control unit; the main control unit identifies whether all the service processing units related to the space have issued space release requests; if so, it delivers a space release request to the shared cache, otherwise it keeps monitoring the space release requests of the service processing units.

12. The shared cache implementation method according to claim 8, characterized in that, after a shared space allocation request is received and parsed, the method includes:

allocating a shared space, and granting the service processing units in the group the permissions to operate on the shared space, the permissions including read permission and write permission;

the service processing unit requesting the shared space acquiring the read/write permission, writing to the shared space, and at the same time writing the address of the target receiver in the group;

after the write completes, releasing the read/write permission of the service processing unit and notifying the target receiver to read the shared space.

13. The shared cache implementation method according to claim 12, characterized in that, after the shared space is allocated, the method further includes: releasing the shared space if it has not been accessed within a predetermined time.

14. The shared cache implementation method according to claim 12, characterized in that, after the shared space is allocated, the method further includes: releasing the shared space according to a release request from the service processing unit that requested the shared space.
15. A shared cache system, including a main control unit and a plurality of service processing units, characterized by further including a shared cache unit, connected respectively to the main control unit and the service processing units, for implementing the cache sharing; the shared cache unit specifically includes:

a high-speed interface, connected respectively to the main control unit and the plurality of service processing units, for receiving the various operation requests for the shared cache unit and forwarding the data transmitted between the service processing units and the shared cache unit;

a cache array, for providing cache space and storing data at high speed; a cache controller, connected between the high-speed interface and the cache array, for performing mutually exclusive writes and simultaneous reads of data on the cache array according to the various operation requests, implementing cache sharing.

16. The shared cache system according to claim 15, characterized in that the cache controller specifically includes: an operation identification subunit, for parsing operation requests for the shared cache;

a write control subunit, for queuing the write requests in a preset order, prohibiting, while one write request is writing to the space, other requests from writing to or reading from the same space, and allowing, after the current write request finishes, subsequent requests to write to or read from the space;

a read control subunit, for reading the data of the space simultaneously according to the read requests while prohibiting other requests from writing to the same space, and allowing, after a read request finishes, subsequent requests to write to the space.

17. The shared cache system according to claim 16, characterized in that the cache controller further includes: a first aging subunit, connected to the write control subunit, for performing aging refresh on space write requests.

18. The shared cache system according to claim 16, characterized in that the cache controller further includes: a cache self-test subunit, for initializing the cache array and reporting status information to the main control unit and each service processing unit, including the total cache space, the available space, the unavailable space and the corresponding start and end addresses.

19. The shared cache system according to claim 16, characterized in that the cache controller further includes: an address mapping subunit, for mapping addresses between the high-speed interface and the cache array and allocating cache space according to the space allocation requests received by the operation identification subunit;

an address release subunit, for releasing cache space according to the space release requests received by the operation identification subunit; wherein the space release request is delivered to the shared cache by the main control unit only after all the service processing units related to the space have issued space release requests.

20. The shared cache system according to claim 19, characterized in that the cache controller further includes: an extension subunit, connected to the address mapping subunit, for extending the addressing space of the cache addresses in the cache array.

21. The shared cache system according to claim 20, characterized in that the cache controller further includes: a shared space allocation subunit, connected to the address mapping subunit, for allocating a shared space for the service processing units in a group according to the shared space allocation requests received by the operation identification subunit;

an operation permission setting subunit, connected to the shared space allocation subunit, for granting the service processing units the permissions to operate on the shared space, the permissions including read permission and write permission, and for reclaiming, after a service processing unit finishes operating on the shared space, the permission granted to that unit;

a notification subunit, connected to the operation permission setting subunit, for obtaining the address of the target receiver in the group and, after the write operation ends, notifying the target receiver to read the cache space.

22. The shared cache system according to claim 21, characterized in that the cache controller further includes: a second aging subunit, connected to the shared space allocation subunit, for periodically refreshing the shared space.

23. The shared cache system according to any one of claims 15 to 22, characterized in that the shared cache system is a distributed system or a centralized system.
24. Shared cache implementation software, applied to a system including a main control unit and a plurality of service processing units, with the main control unit and the plurality of service processing units connected to a shared cache, characterized in that the shared cache implementation software performs the following steps:

receiving and parsing operation requests for the shared cache;

for each operation request to write data into a space in the shared cache, writing data into the space mutually exclusively per request, implementing mutually exclusive cache sharing;

for each operation request to read data from a space in the shared cache, reading data from the space simultaneously for the requests, implementing simultaneous cache sharing.
PCT/CN2008/001146 2007-08-01 2008-06-13 Système à mémoire cache partagée, son procédé de mise en œuvre et son logiciel de mise en œuvre WO2009015549A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/697,376 US20100138612A1 (en) 2007-08-01 2010-02-01 System and method for implementing cache sharing

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CNB2007101415505A CN100489814C (zh) 2007-08-01 2007-08-01 一种共享缓存系统及实现方法
CN200710141550.5 2007-08-01

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US12/697,376 Continuation US20100138612A1 (en) 2007-08-01 2010-02-01 System and method for implementing cache sharing

Publications (1)

Publication Number Publication Date
WO2009015549A1 true WO2009015549A1 (fr) 2009-02-05

Family

ID=38943193

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2008/001146 WO2009015549A1 (fr) 2007-08-01 2008-06-13 Système à mémoire cache partagée, son procédé de mise en œuvre et son logiciel de mise en œuvre

Country Status (3)

Country Link
US (1) US20100138612A1 (zh)
CN (1) CN100489814C (zh)
WO (1) WO2009015549A1 (zh)

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100489814C (zh) * 2007-08-01 2009-05-20 杭州华三通信技术有限公司 一种共享缓存系统及实现方法
CN100589079C (zh) * 2008-05-09 2010-02-10 华为技术有限公司 一种数据共享的方法、系统和装置
CN101770403B (zh) * 2008-12-30 2012-07-25 北京天融信网络安全技术有限公司 一种多核平台上控制系统配置并发与同步的方法
CN102209016B (zh) * 2010-03-29 2014-02-26 成都市华为赛门铁克科技有限公司 一种数据处理方法、装置和数据处理系统
WO2012106905A1 (zh) * 2011-07-20 2012-08-16 华为技术有限公司 报文处理方法及装置
CN102508621B (zh) * 2011-10-20 2015-07-08 珠海全志科技股份有限公司 一种在嵌入式系统上脱离串口的调试打印方法和装置
CN103218176B (zh) * 2013-04-02 2016-02-24 中国科学院信息工程研究所 数据处理方法及装置
CN103368944B (zh) * 2013-05-30 2016-05-25 华南理工大学广州学院 一种内存共享网络架构及其协议规范
CN104750424B (zh) * 2013-12-30 2018-12-18 国民技术股份有限公司 一种存储系统及其非易失性存储器的控制方法
CN104750425B (zh) * 2013-12-30 2018-12-18 国民技术股份有限公司 一种存储系统及其非易失性存储器的控制方法
US9917920B2 (en) 2015-02-24 2018-03-13 Xor Data Exchange, Inc System and method of reciprocal data sharing
CN106330770A (zh) * 2015-06-29 2017-01-11 深圳市中兴微电子技术有限公司 一种共享缓存分配方法及装置
US10768935B2 (en) * 2015-10-29 2020-09-08 Intel Corporation Boosting local memory performance in processor graphics
US10291739B2 (en) * 2015-11-19 2019-05-14 Dell Products L.P. Systems and methods for tracking of cache sector status
CN105743803B (zh) * 2016-01-21 2019-01-25 华为技术有限公司 一种共享缓存的数据处理装置
CN106716975B (zh) * 2016-12-27 2020-01-24 深圳前海达闼云端智能科技有限公司 传输链路的续传方法、装置和系统
US20180203807A1 (en) * 2017-01-13 2018-07-19 Arm Limited Partitioning tlb or cache allocation
CN109491587B (zh) 2017-09-11 2021-03-23 华为技术有限公司 数据访问的方法及装置
CN107656894A (zh) * 2017-09-25 2018-02-02 联想(北京)有限公司 一种多主机处理系统和方法
CN110058947B (zh) * 2019-04-26 2021-04-23 海光信息技术股份有限公司 缓存空间的独占解除方法及相关装置
CN112532690B (zh) * 2020-11-04 2023-03-24 杭州迪普科技股份有限公司 一种报文解析方法、装置、电子设备及存储介质
US11960544B2 (en) * 2021-10-28 2024-04-16 International Business Machines Corporation Accelerating fetching of result sets
CN114079668B (zh) * 2022-01-20 2022-04-08 檀沐信息科技(深圳)有限公司 基于互联网大数据的信息采集整理方法及系统
CN115098426B (zh) * 2022-06-22 2023-09-12 深圳云豹智能有限公司 Pcie设备管理方法、接口管理模块、pcie系统、设备和介质
CN117234431B (zh) * 2023-11-14 2024-02-06 苏州元脑智能科技有限公司 缓存管理方法、装置、电子设备及存储介质

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1510589A (zh) * 2002-11-19 2004-07-07 松下电器产业株式会社 共享存储器数据传送设备
EP1703404A1 (fr) * 2005-03-16 2006-09-20 Amadeus s.a.s Méthode et système pour maintenir la cohérence d'une mémoire cache utilisée par de multiples processus indépendants
CN101089829A (zh) * 2007-08-01 2007-12-19 杭州华三通信技术有限公司 一种共享缓存系统及实现方法

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE68917326T2 (de) * 1988-01-20 1995-03-02 Advanced Micro Devices Inc Organisation eines integrierten Cachespeichers zur flexiblen Anwendung zur Unterstützung von Multiprozessor-Operationen.
US5175837A (en) * 1989-02-03 1992-12-29 Digital Equipment Corporation Synchronizing and processing of memory access operations in multiprocessor systems using a directory of lock bits
US5394555A (en) * 1992-12-23 1995-02-28 Bull Hn Information Systems Inc. Multi-node cluster computer system incorporating an external coherency unit at each node to insure integrity of information stored in a shared, distributed memory
US5630063A (en) * 1994-04-28 1997-05-13 Rockwell International Corporation Data distribution system for multi-processor memories using simultaneous data transfer without processor intervention
US6324623B1 (en) * 1997-05-30 2001-11-27 Oracle Corporation Computing system for implementing a shared cache
US6161169A (en) * 1997-08-22 2000-12-12 Ncr Corporation Method and apparatus for asynchronously reading and writing data streams into a storage device using shared memory buffers and semaphores to synchronize interprocess communications
DE60041444D1 (de) * 2000-08-21 2009-03-12 Texas Instruments Inc Mikroprozessor
US6738864B2 (en) * 2000-08-21 2004-05-18 Texas Instruments Incorporated Level 2 cache architecture for multiprocessor with task—ID and resource—ID
US6658525B1 (en) * 2000-09-28 2003-12-02 International Business Machines Corporation Concurrent access of an unsegmented buffer by writers and readers of the buffer
JP4012517B2 (ja) * 2003-04-29 2007-11-21 インターナショナル・ビジネス・マシーンズ・コーポレーション 仮想計算機環境におけるロックの管理
JP2007241612A (ja) * 2006-03-08 2007-09-20 Matsushita Electric Ind Co Ltd マルチマスタシステム

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1510589A (zh) * 2002-11-19 2004-07-07 松下电器产业株式会社 共享存储器数据传送设备
EP1703404A1 (fr) * 2005-03-16 2006-09-20 Amadeus s.a.s Méthode et système pour maintenir la cohérence d'une mémoire cache utilisée par de multiples processus indépendants
CN101089829A (zh) * 2007-08-01 2007-12-19 杭州华三通信技术有限公司 一种共享缓存系统及实现方法

Also Published As

Publication number Publication date
CN100489814C (zh) 2009-05-20
CN101089829A (zh) 2007-12-19
US20100138612A1 (en) 2010-06-03

Similar Documents

Publication Publication Date Title
WO2009015549A1 (fr) Système à mémoire cache partagée, son procédé de mise en œuvre et son logiciel de mise en œuvre
US10732879B2 (en) Technologies for processing network packets by an intelligent network interface controller
TWI543073B (zh) 用於多晶片系統中的工作調度的方法和系統
KR20190049508A (ko) 데이터 송수신장치 및 데이터 송수신장치의 동작 방법
CN111459417B (zh) 一种面向NVMeoF存储网络的无锁传输方法及系统
TWI547870B (zh) 用於在多節點環境中對i/o 存取排序的方法和系統
TW201539190A (zh) 用於多節點系統中的記憶體分配的方法和裝置
TW201543218A (zh) 具有多節點連接的多核網路處理器互連之晶片元件與方法
US10606753B2 (en) Method and apparatus for uniform memory access in a storage cluster
US10951741B2 (en) Computer device and method for reading or writing data by computer device
JP7512454B2 (ja) ファブリックを介したnvmエクスプレス
JP2000172457A5 (ja) 通信制御方法、機器、ホスト装置、周辺装置及び制御方法
JP2008086027A (ja) 遠隔要求を処理する方法および装置
WO2015027806A1 (zh) 一种内存数据的读写处理方法和装置
WO2018024173A1 (zh) 报文处理方法及路由器
CN102843435A (zh) 一种在集群系统中存储介质的访问、响应方法和系统
TW200947957A (en) Non-block network system and packet arbitration method thereof
WO2014101502A1 (zh) 基于内存芯片互连的内存访问处理方法、内存芯片及系统
US20170034267A1 (en) Methods for transferring data in a storage cluster and devices thereof
CN116260887A (zh) 数据传输方法、数据发送装置、数据接收装置和存储介质
CN104899105A (zh) 一种进程间通信方法
CN109167740B (zh) 一种数据传输的方法和装置
CN113778937A (zh) 用于执行片上网络(NoC)中的事务聚合的系统和方法
CN100391200C (zh) 一种数据传送方法
JP2008097273A (ja) ネットワークインタフェース装置、ネットワークインタフェース制御方法、情報処理装置、データ転送方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08772957

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 08772957

Country of ref document: EP

Kind code of ref document: A1