WO2015165298A1 - Computer, Control Device, and Data Processing Method - Google Patents

Computer, Control Device, and Data Processing Method

Info

Publication number
WO2015165298A1
WO2015165298A1 (PCT/CN2015/072672, CN2015072672W)
Authority
WO
WIPO (PCT)
Prior art keywords
application request
control device
resource allocation
computer
label
Prior art date
Application number
PCT/CN2015/072672
Other languages
English (en)
French (fr)
Inventor
包云岗
马久跃
隋秀峰
任睿
张立新
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to MX2016011157A priority Critical patent/MX360278B/es
Priority to CA2935114A priority patent/CA2935114C/en
Priority to JP2016553382A priority patent/JP6475256B2/ja
Priority to SG11201605623PA priority patent/SG11201605623PA/en
Priority to EP15785332.6A priority patent/EP3076296A4/en
Priority to KR1020167019031A priority patent/KR101784900B1/ko
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Priority to AU2015252673A priority patent/AU2015252673B2/en
Priority to BR112016016326-5A priority patent/BR112016016326B1/pt
Priority to RU2016134457A priority patent/RU2651219C2/ru
Publication of WO2015165298A1 publication Critical patent/WO2015165298A1/zh
Priority to PH12016501374A priority patent/PH12016501374A1/en
Priority to US15/335,456 priority patent/US10572309B2/en


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/48 - Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 - Task transfer initiation or dispatching
    • G06F 9/4843 - Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 - Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 - Allocation of resources, e.g. of the central processing unit [CPU], to service a request
    • G06F 9/5011 - Allocation of resources to service a request, the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F 9/5027 - Allocation of resources to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/5044 - Allocation of resources to service a request, the resource being a machine, considering hardware capabilities
    • G06F 9/54 - Interprogram communication
    • G06F 9/546 - Message passing systems or structures, e.g. queues

Definitions

  • the present invention relates to the field of computers, and in particular, to a computer, a control device, and a data processing method.
  • In a computer, multiple applications can share resources. For example, multiple applications can simultaneously request resources from the memory, thereby increasing the utilization of memory resources. However, when multiple applications share resources, they interfere with one another, so that some important applications cannot obtain resources in time, which affects the quality of service.
  • Embodiments of the present invention provide a computer, a control device, and a data processing method for improving the quality of service of application requests.
  • a first aspect of the embodiments of the present invention provides a computer, where the computer includes a processing unit and a control device;
  • the processing unit is configured to add a label to an application request, and send the labeled application request to the control device;
  • the control device is configured to receive the labeled application request and determine, according to the label and a pre-saved resource allocation policy, the amount of resources allocated to the application request, the resource allocation policy including a correspondence between the label and the amount of resources allocated to the application request; and is further configured to instruct a component of the computer to process the application request according to the amount of resources allocated to the application request.
  • the control device further includes a buffer, where the buffer includes at least two queues, each queue corresponds to a certain range of resource amounts, and the queues have different priorities;
  • the control device is specifically configured to select, from the at least two queues and according to the amount of resources allocated to the application request, the queue corresponding to the application request, and save the application request in the queue corresponding to the application request;
  • the component of the computer is configured to obtain and execute the application request from a queue corresponding to the application request.
  • control device further includes a processor and a cache, where the resource allocation policy is stored in the cache;
  • the processor is further configured to acquire the resource allocation policy from the cache.
  • the resource allocation policy includes a control table, where the control table includes multiple entries, and each of the multiple entries includes a correspondence between the label and the amount of resources allocated to the application request;
  • the processor is specifically configured to send a query instruction to the cache, where the query instruction includes the label;
  • the cache is configured to obtain an entry corresponding to the label according to the query instruction, and send the entry corresponding to the label to a processor of the control device.
  • the control device further includes a programming interface, and the programming interface is used to modify the resource allocation policy.
  • the computer further includes a memory, where the node management software is stored in the memory;
  • the processing unit is further configured to define the resource allocation policy by using the node management software
  • the control device is further configured to acquire the resource allocation policy from the node management software, and write the resource allocation policy into the cache.
  • the processing unit further includes a tag register
  • the processing unit is further configured to define the label by the node management software, and write the label into the label register by using the node management software;
  • the processing unit is further for reading the tag from the tag register.
  • a second aspect of the embodiments of the present invention provides a control device, where the control device is disposed on a component of a computer; the control device includes a processor;
  • the processor is configured to receive the labeled application request and determine, according to the label and a pre-saved resource allocation policy, the amount of resources allocated to the application request, where the resource allocation policy includes a correspondence between the label and the amount of resources allocated to the application request; and is further configured to instruct a component of the computer to process the application request according to the amount of resources allocated to the application request.
  • the control device further includes a buffer, where the buffer includes at least two queues, each queue corresponds to a certain range of resource amounts, and the queues have different priorities;
  • the processor is specifically configured to select, from the at least two queues and according to the amount of resources allocated to the application request, the queue corresponding to the application request, and save the application request in the queue corresponding to the application request;
  • the component of the computer is configured to obtain and execute the application request from a queue corresponding to the application request.
  • the control device further includes a cache, in which the resource allocation policy is stored.
  • the processor is further configured to acquire the resource allocation policy from the cache.
  • the resource allocation policy includes a control table, where the control table includes multiple entries, and each of the multiple entries includes a correspondence between the label and the amount of resources allocated to the application request;
  • the processor is specifically configured to send a query instruction to the cache, where the query instruction includes the label;
  • the cache is configured to obtain an entry corresponding to the label according to the query instruction, and send the entry corresponding to the label to the processor.
  • the control device further includes a programming interface, and the programming interface is used to modify the resource allocation policy.
  • the resource allocation policy is defined by the computer through the node management software and delivered to the control device, where the node management software is stored in a memory of the computer.
  • a third aspect of the embodiments of the present invention provides a data processing method, where the method is applied to a control device, where the control device is disposed in a component of a computer; the method includes:
  • the control device receives an application request carrying a tag
  • the control device determines, according to the label and a pre-saved resource allocation policy, the amount of resources allocated to the application request, where the resource allocation policy includes a correspondence between the label and the amount of resources allocated to the application request;
  • the control device instructs a component of the computer to process the application request according to the amount of resources allocated to the application request.
  • the control device further includes a buffer, where the buffer includes at least two queues, each queue corresponds to a certain range of resource amounts, and each queue has a different priority;
  • the control device instructs the component of the computer to process the application request according to the amount of resources allocated to the application request, including:
  • the control device selects, from the at least two queues and according to the amount of resources allocated to the application request, the queue corresponding to the application request, and saves the application request in the queue corresponding to the application request, so that the component of the computer obtains and executes the application request from the queue corresponding to the application request.
  • control device further includes a processor and a cache, where the resource allocation policy is stored in the cache;
  • the method also includes the processor of the control device obtaining the resource allocation policy from the cache.
  • the resource allocation policy includes a control table, where the control table includes multiple entries, and each of the multiple entries includes a correspondence between the label and the amount of resources allocated to the application request;
  • the obtaining, by the processor of the control device, of the resource allocation policy from the cache includes: the processor of the control device sends a query instruction to the cache, where the query instruction includes the label;
  • the cache obtains the entry corresponding to the label according to the query instruction, and sends the entry corresponding to the label to the processor of the control device.
  • An embodiment of the present invention provides a computer, where the computer includes a processing unit and a control device. The processing unit adds a label to an application request and sends the labeled application request to the control device.
  • The control device determines the amount of resources allocated to the application request according to the label and the pre-saved resource allocation policy, and instructs a component of the computer to process the application request according to the amount of resources allocated to the application request.
  • In this way, the component of the computer processes each application request according to the amount of resources allocated to it, which prevents multiple application requests from contending for resources with one another and improves the quality of service.
  • FIG. 1 is a system architecture diagram of a computer according to an embodiment of the present invention.
  • FIG. 2a is a system architecture diagram of another computer according to an embodiment of the present invention.
  • FIG. 2b is a schematic structural diagram of a control device according to an embodiment of the present invention.
  • FIG. 3 is a schematic structural diagram of another control device according to an embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of still another control device according to an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of still another control device according to an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of a control plane network according to an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of node management software according to an embodiment of the present invention.
  • FIG. 8 is a schematic flowchart diagram of a data processing method according to an embodiment of the present invention.
  • FIG. 9 is a schematic flowchart diagram of another data processing method according to an embodiment of the present invention.
  • FIG. 10 is a schematic flowchart diagram of still another data processing method according to an embodiment of the present invention.
  • The embodiment of the present invention provides a computer, a control device, and a data processing method, which are used to improve the quality of service of application requests.
  • FIG. 1 is a schematic structural diagram of a system of a computer 10 according to an embodiment of the present invention.
  • the computer 10 includes a plurality of processing units 11, a plurality of control devices 66, and a plurality of computer components 33 (referred to as "parts" for short in FIG. 1).
  • the components of the computer referred to in the embodiments of the present invention are the components of a computer whose resources may be occupied by multiple applications.
  • the processing unit 11 refers to one of the functionally identical processor cores contained in a central processing unit (CPU), and is configured to execute various operation commands such as reads and writes.
  • the component 33 of the computer comprises: an on-chip high-speed interconnection network, and components of the computer directly connected to the on-chip high-speed interconnection network, such as a high-speed buffer (also called a cache), a memory, a graphics processing unit (GPU), a video memory, etc.; it may also include an I/O interconnection network and I/O devices connected to the I/O interconnection network, such as a magnetic disk (also referred to as a hard disk), a network card, a display, and the like.
  • the on-chip high-speed interconnect network is a connector for connecting the plurality of processing units 11, and the on-chip high-speed interconnect network is also connected with the cache, the memory, the graphics processor, the video memory, and the like.
  • For the cache, the resources allocated to an application can be cache space; for the memory, the resources allocated to an application can be memory space; for the graphics processor, the resources allocated to an application can be hardware acceleration resources; for the video memory, the resources allocated to an application can be video memory space.
  • an I/O interconnect network (also known as a south bridge) can be connected to the on-chip high-speed interconnect network.
  • An I/O interconnect network is a device used to control I/O devices.
  • the computer component 33 also includes I/O devices that are directly connected to the I/O interconnect network, such as a magnetic disk (also known as a hard disk), a network card, a display, and the like.
  • computer 10 may process multiple applications, and these applications require resources that occupy components of the computer (eg, memory).
  • Because the available resources are limited, some important applications may not be processed in a timely manner, which affects the quality of service.
  • control device 66 is provided on components of a computer that may be requested or occupied by multiple applications.
  • the control device 66 is configured to allocate different amounts of resources to the application according to the type of the application to process the application.
  • Components of a computer herein that may be requested or occupied by multiple applications include, but are not limited to, an on-chip high-speed interconnect network, a cache, a memory, a graphics processor, a video memory, and an I/O interconnect network.
  • The control device 66 may be disposed on only one of the aforementioned components of the computer; or the control device 66 may be disposed on several of the components; or the control device 66 may be disposed on all of the aforementioned components of the computer.
  • In order for the control device 66 to distinguish different types of applications, the type of an application request needs to be identified at the source where the application request is generated (that is, by identifying the application corresponding to the request), and a label is added to the request; when the labeled application request is subsequently sent to the control device 66 on a component of the computer, the control device 66 can perform different processing for different types of applications according to the label. It should be noted that, in the embodiment of the present invention, the application and the application request have the same meaning. Moreover, the application request in the embodiment of the present invention includes various instructions generated inside the computer and various instructions received from outside the computer, such as a file access request, a video play request, a memory access request, an I/O request, an interconnect request, and so on.
  • the source end generated by the application request here may be the processing unit 11 or an I/O device (for example, a network card).
  • When the application request is generated inside the computer 10, the source of the application request is the processing unit 11; when the application request comes from outside the computer 10, for example, an application request received over the Internet, the source of the application request may be a network card or another input/output device.
  • the way to tag can be:
  • a tag register 77 (shown in Figure 2a) is provided in the processing unit 11, and a register value is stored in the tag register 77.
  • When the processing unit 11 generates an application request, the processing unit 11 reads the register value and uses it to add a label to the application request; the label is the register value.
  • the tags are defined by the node management software (described in more detail later) for application requests.
  • the node management software may be a module in the operating system or a module in an intermediate software layer between the operating system and the computer hardware, and it runs on the processing unit 11. After the node management software defines a label for an application request, the operating system can write the label into the context of the process corresponding to the application request, and the label in the process context is then written into the register.
  • the processing unit 11 itself can contain a plurality of registers, one of the plurality of registers can be set as a tag register 77 in which the tag of the application is saved.
  • When an application request is generated, the register value is read from the tag register 77, and the register value is written into the application request as the label.
  • the expression of the label can be the ID of the application, letters, numbers, etc., which are not limited herein.
  • Another alternative implementation is:
  • a new register is added to processing unit 11, and the new register is defined as a tag register 77, which is used to hold the tag of the application.
  • The subsequent processing is the same as in the previous implementation, and details are not described here again.
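  • To make the tagging mechanism above concrete, the following C sketch models a processing unit that reads a value from its tag register and attaches it to each application request it generates. The structure, field names, and the stand-in variable for tag register 77 are illustrative assumptions, not part of the patent.

```c
#include <stdint.h>

/* Illustrative model of an application request; the real request
 * format is not specified in the text. */
struct app_request {
    uint32_t tag;      /* label copied from the tag register (assumed field) */
    uint32_t opcode;   /* e.g. read or write (assumed field)                 */
    uint64_t address;  /* target address of the operation (assumed field)    */
};

/* Stand-in for tag register 77 inside the processing unit; in hardware
 * this would be an actual register written by the node management
 * software via the operating system. */
static volatile uint32_t tag_register_77;

/* The processing unit labels a request before sending it onward:
 * the label is simply the current tag-register value. */
static void label_request(struct app_request *req)
{
    req->tag = tag_register_77;
}
```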
  • For an application request received by the network card, one implementation is that the network card itself does not perform the action of tagging the application request; in this case, the application request carried in the received message packet is already a labeled application request. That is, the sender of the application request tags the application request before sending it.
  • each server or computer can determine the label of the application request through negotiation, or can set a label server for defining and sending the label of the application request to each server. Embodiments such as these are within the scope of protection of embodiments of the present invention.
  • Another implementation is that, when the network card receives a message packet, it obtains the application request by parsing the message packet and then tags the application request. In this case, the manner in which the network card tags the application request is similar to the manner in which the processing unit 11 described above adds labels, and is not repeated here.
  • Other components in the computer 10, such as the I/O interconnection network, may also add a label to the application request in a manner similar to that of the processing unit 11 or the network card; the embodiment of the present invention does not impose any restriction on which component adds the label.
  • control device 66 is configured to receive the application request after the tag is added, and determine, according to the tag and a pre-saved resource allocation policy, an amount of resources allocated to the application request, where the resource allocation policy includes the a correspondence between the tag and the amount of resources allocated to the application request; and is further configured to instruct the component of the computer to process the application request according to the amount of resources allocated to the application request.
  • An embodiment of the present invention provides a computer, where the computer includes a processing unit and a control device. The processing unit adds a label to an application request and sends the labeled application request to the control device.
  • The control device determines the amount of resources allocated to the application request according to the label and the pre-saved resource allocation policy, and instructs a component of the computer to process the application request according to the amount of resources allocated to the application request.
  • In this way, the component of the computer processes each application request according to the amount of resources allocated to it, which prevents multiple application requests from contending for resources with one another and improves the quality of service.
  • The structure and function of the control device 66 will be mainly described below.
  • Control device 66 refers to a device on any of the components of the computer within computer 10.
  • When a component of the computer itself contains a controller, the control device 66 may be a control device embedded in that controller, or a control device that is added and connected to the original controller; when a component of the computer does not itself contain a controller, the control device 66 may be a newly added controller or control device connected to that component of the computer.
  • control device 66 includes a processor 600a;
  • the control device 66 is configured to receive the labeled application request and determine, according to the label and a pre-saved resource allocation policy, the amount of resources allocated to the application request, where the resource allocation policy includes a correspondence between the label and the amount of resources allocated to the application request; and is further configured to instruct the component of the computer to process the application request according to the amount of resources allocated to the application request.
  • The control device 66 may also include a buffer 600b.
  • the processor 600a is configured to save the labeled application request in the buffer 600b, read the label from the buffer 600b, and determine, according to the label and the pre-saved resource allocation policy, the amount of resources allocated to the application request.
  • the buffer 600b may also be a register in the processor 600a.
  • In that case, the processing may be as follows: the processor 600a is configured to save the labeled application request in its register, read the label from the register, determine the amount of resources allocated to the application request according to the label and the pre-saved resource allocation policy (the resource allocation policy including a correspondence between the label and the amount of resources allocated to the application request), and instruct the component of the computer to process the application request according to the amount of resources allocated to the application request.
  • As shown in FIG. 3, the control device 66 can include a processor 600i, a buffer 600b, and a queue 600c.
  • the processor 600i may be a Field-Programmable Gate Array (FPGA) or another programmable device.
  • a resource allocation policy is built into the processor 600i; the resource allocation policy may be a control table (as shown in Table 1), and the control table is editable.
  • each entry in Table 1 corresponds to a label.
  • each entry of the control table includes a plurality of "attributes", and the "attributes” represents the amount of resources allocated to the application request corresponding to the tag.
  • the amount of resources may take various forms; for example, it may include a target quality of service, IPC (instructions per cycle), response time, or maximum tolerance.
  • the value range of the "attributes" can be set by the user; for example, the value interval can be defined as no less than 30%, or no more than 80%.
  • each entry also includes a plurality of "states", which represent the amount of resources currently consumed by the application corresponding to the tag, and the value of the "status" can be monitored and updated in real time.
  • the resource allocation strategy may also be a piece of firmware code built into the FPGA.
  • the embodiment of the present invention does not limit the manner in which the resource allocation policy is saved.
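  • Table 1 itself is not reproduced above, but the surrounding text describes each entry as pairing a label with "attributes" (the amount of resources allocated to the corresponding request) and "states" (the amount currently consumed). A minimal C sketch of such a control table, with assumed field names, a percentage-based attribute, and a linear lookup by label, could look as follows.

```c
#include <stdint.h>
#include <stddef.h>

/* One control-table entry; the field names and the use of percentages
 * for the "attribute" are assumptions made for illustration. */
struct ctl_entry {
    uint32_t label;           /* tag carried by the application request */
    uint8_t  attr_min_pct;    /* "attribute": e.g. no less than 30%     */
    uint8_t  attr_max_pct;    /* "attribute": e.g. no more than 80%     */
    uint8_t  state_used_pct;  /* "state": resources currently consumed  */
};

/* The control table is simply an array of entries, one per label. */
struct ctl_table {
    struct ctl_entry *entries;
    size_t            count;
};

/* Look up the entry whose label matches the request's tag; returns
 * NULL when no entry exists for that label. */
static const struct ctl_entry *
ctl_lookup(const struct ctl_table *tbl, uint32_t label)
{
    for (size_t i = 0; i < tbl->count; i++) {
        if (tbl->entries[i].label == label)
            return &tbl->entries[i];
    }
    return NULL;
}
```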
  • the buffer 600b is a temporary buffer area.
  • When the control device 66 receives an application request with a tag, it first puts the request into the buffer 600b for temporary storage.
  • the queue 600c is also a temporary buffer area, which may be located in the same temporary buffer area as the buffer 600b, or may be separated from the buffer 600b as a temporary buffer area.
  • the queue 600c is used to save application requests processed by the processor 600i.
  • a plurality of queues may be included in the queue 600c, and different queues correspond to different address segments in the buffer 600b. Different queues have different priorities, and the priority determines the order in which the components of the computer execute the application requests in the respective queues; this also means that the amounts of resources allocated to different queues are different.
  • control device 66 may also include a programming interface 600d.
  • the programming interface 600d is used to implement an address space mapping mechanism that can map the control table built into the processor 600i into the physical address space of the computer 10.
  • the node management software can access the physical address space of the computer 10 to edit the control table.
  • the programming interface 600d can provide a variety of functions for adding, modifying, or deleting entries stored in the control table.
  • the processor 600i may also provide the value of each "state" in each entry of the control table to the node management software, so that the node management software can collect statistics on the "state" values of the application requests and then adjust the resource allocation policy.
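  • Since the programming interface 600d maps the control table into the physical address space of the computer 10, the node management software can edit entries by writing to the mapped addresses. The sketch below shows what such memory-mapped editing might look like from software; the base address, entry layout, and helper name are purely illustrative assumptions.

```c
#include <stdint.h>

/* Assumed memory-mapped layout of one control-table entry; the real
 * layout is defined by the control device, not by this sketch. */
struct mapped_ctl_entry {
    volatile uint32_t label;
    volatile uint32_t attr;   /* amount of resources allocated        */
    volatile uint32_t state;  /* read-only: resources consumed so far */
};

/* Hypothetical physical address to which the control table has been
 * mapped (obtained in practice via the programming interface, e.g.
 * through a driver or an mmap of the mapped region). */
#define CTL_TABLE_BASE ((volatile struct mapped_ctl_entry *)0x40000000u)

/* Add or modify an entry by writing through the mapped address space. */
static void ctl_write_entry(unsigned idx, uint32_t label, uint32_t attr)
{
    CTL_TABLE_BASE[idx].label = label;
    CTL_TABLE_BASE[idx].attr  = attr;
}
```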
  • the processor 600i may obtain a tagged application request from the buffer 600b, and query the corresponding entry in the table according to the tag, thereby obtaining the “attribute” of the application request. Since the "attribute" of the application request indicates the amount of resources allocated to the application request, the processor 600i may select the application from the at least two queues according to the amount of resources allocated to the application request. Request the corresponding queue and put the application request into the corresponding queue.
  • After that, the component of the corresponding computer may be instructed to process the application request.
  • the "components of the corresponding computer” herein refer to the components of the computer to which the control device belongs. For example, if the control device refers to a control device on the cache, then the “component of the corresponding computer” herein refers to the cache.
  • Instructing the component of the corresponding computer to process the application request here may mean that the processor 600i takes the application request out of the corresponding queue and sends it to the component of the corresponding computer, or that the component of the corresponding computer obtains the application request from the corresponding queue itself.
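  • Expressed against the ctl_table and app_request sketches given earlier, the per-request handling of the processor 600i can be pictured as: look up the entry for the request's tag, map the allocated amount onto a queue, and enqueue the request. The queue thresholds, helper names, and the fallback for an unknown label are assumptions.

```c
#include <stdint.h>

/* Assumed three-level priority split over the at least two queues. */
enum queue_id { Q_LOW = 0, Q_MID = 1, Q_HIGH = 2 };

/* Map an allocated resource share (a percentage, for illustration)
 * onto one of the queues; the 40%/70% thresholds are illustrative. */
static enum queue_id select_queue(uint8_t alloc_pct)
{
    if (alloc_pct >= 70) return Q_HIGH;
    if (alloc_pct >= 40) return Q_MID;
    return Q_LOW;
}

/* Skeleton of the per-request step: uses ctl_table, ctl_entry,
 * ctl_lookup, and app_request from the earlier sketches. */
static void handle_request(const struct ctl_table *tbl,
                           struct app_request *req,
                           void (*enqueue)(enum queue_id, struct app_request *))
{
    const struct ctl_entry *e = ctl_lookup(tbl, req->tag);
    uint8_t alloc = e ? e->attr_max_pct : 0;  /* unknown label: lowest priority */
    enqueue(select_queue(alloc), req);
}
```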
  • control device 66 may further include a data forwarder 600j.
  • the application request may be taken out from the queue and sent to the data forwarder 600j.
  • the data forwarder 600j is configured to forward the application request to a component of the corresponding computer. That is to say, the component of the corresponding computer here obtains the application request from the corresponding queue may be obtained by the data forwarder 600j.
  • the processor 600i may perform some pre-processing operations on the application request, such as compression, encryption, etc., and then put the application request after the pre-processing operation into the corresponding queue 600c. .
  • As shown in FIG. 4, another alternative embodiment of the control device 66 is as follows:
  • the control device 66 includes a buffer 600b, a queue 600c, a microprocessor 600e, and a high-speed memory (also known as a cache) 600f.
  • the buffer 600b and the queue 600c are the same as the buffer and the queue shown in FIG. 3, and are not described here again.
  • the microprocessor 600e may be a CPU or another controller similar to a CPU.
  • the difference from the processor 600i shown in FIG. 3 is that the processor 600i is a programmable device with a built-in resource allocation policy, and that resource allocation policy is editable;
  • the microprocessor 600e, by contrast, performs the functions of a CPU but cannot have a built-in control table. Therefore, the control device 66 shown in FIG. 4 also includes a cache 600f. A resource allocation policy is stored in the cache 600f; this resource allocation policy is program code whose function is similar to that of the control table.
  • When the control device 66 receives a tagged application request, it first places the application request in the buffer 600b.
  • The microprocessor 600e may obtain the tagged application request from the application request queue held in the buffer 600b, read the resource allocation policy from the cache 600f into the buffer 600b, determine the amount of resources allocated to the application request according to the tag and the resource allocation policy, select the queue corresponding to the application request from the at least two queues, and place the application request into the corresponding queue.
  • the microprocessor 600e then retrieves the application request from the corresponding queue and sends it to the components of the corresponding computer.
  • control device 66 may further include a data forwarder 600j.
  • the application request may be fetched from the queue and sent to the data forwarder 600j.
  • the data forwarder 600j is configured to forward the application request to the component parts of the corresponding computer.
  • the subsequent processing manner is the same as the embodiment shown in FIG. 3, and details are not described herein again.
  • control logic may be included in the control device 66 shown in FIG. 4 for modifying the resource allocation policy saved in the cache 600f.
  • the microprocessor 600e may perform some pre-processing operations on the application request, such as compression, encryption, etc., and then put the application request after the pre-processing operation into the corresponding Queue 600c.
  • Alternatively, the microprocessor 600e may read both the labeled application request and the resource allocation policy saved in the cache 600f into its own buffer, process the application request in its own buffer, and then place the application request into the corresponding queue 600c according to the processing result.
  • For the control device 66, another alternative embodiment is as follows:
  • control device 66 may include buffer 600b, comparison control logic 600g, high speed memory 600f, and queue 600c.
  • the comparison control logic 600g referred to here may be an application-specific integrated circuit (ASIC) or another integrated circuit.
  • Buffer 600b is identical to the buffer described previously.
  • a control table (Table 1) is stored in the high speed memory 600f.
  • When the control device 66 receives a tagged application request, it first places the application request into an application request queue.
  • the queue can be part of the buffer space in buffer 600b or it can be a separate buffer.
  • the comparison control logic 600g reads the application request from that queue into the buffer 600b (or into the buffer of the comparison control logic 600g), and issues a read instruction to the high-speed memory 600f according to the tag of the application request, requesting the high-speed memory 600f to return the entry corresponding to the tag.
  • the contents of the entry are loaded into the buffer 600b (or the buffer of the comparison control logic 600g), and the comparison control logic 600g, in the buffer 600b (or in its own buffer) and according to the contents of the entry, selects the queue corresponding to the application request and puts the application request into the queue 600c.
  • the comparison control logic 600g then fetches the application request from the corresponding queue and sends it to the components of the corresponding computer.
  • control device 66 may further include a data forwarder 600j.
  • After the comparison control logic 600g puts different application requests into different queues in the queue 600c, the application request may be taken out of the queue and sent to the data forwarder 600j.
  • the data forwarder 600j is configured to forward the application request to the component parts of the corresponding computer.
  • comparison control logic 600g may perform some pre-processing operations on the application, such as compression, encryption, etc., in the buffer 600b.
  • control device 66 may also include a programming interface 600d for editing the control table stored in cache 600f.
  • For details, refer to the description of the programming interface 600d in the embodiment shown in FIG. 3.
  • The control devices 66 on the various components within the computer 10 may not be identical. Specifically, the resource allocation policies saved by the respective control devices 66 may differ; for example, for the same application request, when it needs to access the memory, the amount of memory resources allocated to it may reach 80%, whereas when it needs to produce output through an I/O device, the amount of resources allocated to it by the I/O interconnect network may be only 70%.
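  • The 80%/70% example above can be expressed as one policy entry per component for the same label. The following small configuration sketch mirrors those numbers; the identifiers, the label value, and the table layout are illustrative assumptions.

```c
#include <stdint.h>

/* Components that may carry a control device, limited here to the two
 * named in the example above. */
enum component { COMP_MEMORY, COMP_IO_INTERCONNECT };

/* Per-component policy for one label: same tag, different shares. */
struct per_component_policy {
    uint32_t       label;
    enum component comp;
    uint8_t        max_share_pct;
};

/* The same application request (label 1, an assumed value) gets 80% of
 * memory resources but only 70% of I/O interconnect resources. */
static const struct per_component_policy example_policies[] = {
    { .label = 1, .comp = COMP_MEMORY,          .max_share_pct = 80 },
    { .label = 1, .comp = COMP_IO_INTERCONNECT, .max_share_pct = 70 },
};
```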
  • In the embodiment of the present invention, the amount of resources allocated to an application request can be determined according to the label of the application request and the pre-saved resource allocation policy, and the component of the computer can be instructed to process the application request according to the amount of resources allocated to the application request.
  • In this way, the component of the computer processes the application request according to the amount of resources allocated to it, which prevents multiple application requests from contending for resources with one another and improves the quality of service.
  • A control device network may be formed among the control devices 66 on the components of the computer, for connecting all of the control devices 66, as shown by the dashed lines in FIG. 1 or FIG. 2a.
  • Each control device includes a physical access point, and accesses the control device network through the physical access point.
  • the control device network may transmit data through a Peripheral Component Interconnect Express (PCIe) protocol or other protocol, and the type of the protocol is not limited in the embodiment of the present invention.
  • the node management software (not shown in FIG. 1) may also be included in the embodiment of the present invention.
  • The node management software may be a module in the operating system or a module in an intermediate software layer (hypervisor) between the operating system and the computer hardware, and it runs on one or more of the processing units 11.
  • the node management software is used to manage all the control devices 66 through the control device network, for example, to perform initialization operations on the respective control devices 66, collect the "state" values of the respective control devices 66, determine or adjust resource allocation policies according to the collected "state" values, and deliver the resource allocation policies to the respective control devices 66.
  • One embodiment is to establish a dedicated network in the computer 10, in which all the control devices 66 are connected through a router, and the physical access points of all the control devices 66 are connected to this dedicated network.
  • the dedicated network may provide a set of communication protocols that define the format of the message packets used to access each of the control devices 66.
  • the message packet can include, but is not limited to, controlling a device number or ID, controlling a device command (eg, adding a resource allocation policy or deleting a resource allocation policy or modifying a resource allocation policy), and controlling device command parameters.
  • the message packets may also be encapsulated by the PCIe protocol or other protocols for transmission.
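  • The message packets on the control device network are described as carrying a control device number or ID, a command (adding, deleting, or modifying a resource allocation policy), and command parameters, optionally encapsulated over PCIe for transmission. A hedged sketch of such a packet as a C structure follows; the field widths, command codes, and parameter layout are assumptions, since the patent does not fix an on-wire format.

```c
#include <stdint.h>

/* Commands named in the text; the numeric codes are assumptions. */
enum ctl_cmd {
    CTL_CMD_ADD_POLICY    = 1,
    CTL_CMD_DELETE_POLICY = 2,
    CTL_CMD_MODIFY_POLICY = 3,
};

/* Illustrative message packet exchanged between the node management
 * software and a control device over the control device network. */
struct ctl_message {
    uint16_t device_id;  /* control device number or ID                */
    uint16_t cmd;        /* one of enum ctl_cmd                        */
    uint32_t label;      /* command parameter: which entry is affected */
    uint32_t attr;       /* command parameter: amount of resources     */
};
```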
  • each control device 66 performs data interaction with the node management software by using an address space mapping manner. Specifically, each control device 66 maps its control table or resource allocation policy to the physical address space of the computer 10, and the node management software can access these address spaces to implement editing of the control table or resource allocation policy.
  • the node management software may include a control device driver module 701, a monitoring management module 702, and a user programming interface 703.
  • the control device driver module 701 is configured to scan for and identify control devices 66 on newly added components of the computer, initialize the control devices 66, and deliver resource allocation policies to the control devices 66.
  • the control device driver module 701 is further configured to add, modify, or delete a resource allocation policy.
  • the monitoring management module 702 is configured to store the collected values of the “states” of the respective control devices 66, perform association analysis on the collected values of the “status” of the respective control devices 66, and determine a resource allocation policy according to the user requirements.
  • the user programming interface 703 is used to provide an application programming interface (API), and other software or applications can implement programming of the control device 66 through an API.
  • the API includes at least the following interfaces: an initialization command, an add resource allocation policy command, a modify resource allocation policy command, a delete resource allocation policy command, and the like.
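  • The user programming interface 703 is said to expose at least an initialization command and add/modify/delete resource-allocation-policy commands. A header-style sketch of what such an API could look like is shown below; all function names, parameters, and the error convention are assumptions rather than the actual interface.

```c
#include <stdint.h>

/* Hypothetical node-management API mirroring the commands listed in
 * the text; return value 0 means success, negative values mean error
 * (an assumed convention). */
int nm_init_control_device(uint16_t device_id);
int nm_add_policy(uint16_t device_id, uint32_t label, uint32_t attr);
int nm_modify_policy(uint16_t device_id, uint32_t label, uint32_t attr);
int nm_delete_policy(uint16_t device_id, uint32_t label);
```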
  • The control device shown in FIG. 2b, FIG. 3, FIG. 4, or FIG. 5 is only one example of the embodiments of the present invention; the specific form of the control device is not limited in the embodiments of the present invention, and the control device may be, for example, an application-specific integrated circuit or take any other form, as long as it implements the functions of the control device in a computer.
  • the computer according to the embodiment of the present invention may be a personal computer, a server, a mobile phone, or a handheld computer.
  • the specific implementation form of the computer is not limited, and other system examples or application scenarios are not described herein.
  • the data processing method in the embodiment of the present invention may be implemented in the control device shown in FIG. 2b, FIG. 3, FIG. 4, or FIG. 5. As shown in FIG. 8, the method includes:
  • Step S101 The control device receives an application request that carries a tag.
  • the tagged application request may be from the processing unit 11 shown in Figure 1 or Figure 2a, or it may be from a network card.
  • the processing unit 11 needs to add a tag to the application when receiving or generating an application request.
  • For the manner in which the processing unit 11 adds a label to the application request, refer to the embodiment shown in FIG. 2a; details are not described herein again.
  • When the application request comes from a network card, in one implementation the application request carried in the received message packet is already a tagged application request;
  • in another implementation, the network card obtains the application request by parsing the message packet and then tags the application request.
  • the operating system or the hypervisor may perform initialization operations on the respective control devices 66 in the computer 10 through the node management software, so that the respective control devices are in an active state.
  • the node management software transmits the resource allocation policy to the respective control devices 66 through the control device network.
  • Step S102 The control device determines, according to the label and a pre-saved resource allocation policy, the amount of resources allocated to the application request, where the resource allocation policy includes a correspondence between the label and the amount of resources allocated to the application request.
  • The control device writes the label-carrying application request into the first buffer of the control device (the buffer 600b shown in FIG. 2b), and reads, from the first buffer, the label carried by the application request.
  • The label can be matched with a label in the resource allocation policy, so the control device can determine, based on the label and the pre-saved resource allocation policy, the amount of resources allocated to the application request.
  • the amount of resources herein may be the number of resources allocated by the component of the computer where the control device is located to the application request, or may be a proportional value, and may also include priority information and the like.
  • the amount of the resource may be the size of the memory space, or the ratio of the memory space (for example, 80%), or other information (see the table).
  • the embodiment of the present invention does not limit the representation of the amount of resources; any representation that reflects the priority or speed with which the component processes the application request falls within the protection scope of the embodiment of the present invention.
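  • Because the amount of resources may be an absolute size, a proportion, or priority information, one possible (purely illustrative) encoding is a tagged union, as sketched below.

```c
#include <stdint.h>

/* The text allows several representations of the "amount of resources";
 * this tagged union is just one assumed encoding. */
enum res_kind { RES_ABSOLUTE, RES_PERCENT, RES_PRIORITY };

struct resource_amount {
    enum res_kind kind;
    union {
        uint64_t bytes;    /* e.g. size of allocated memory space */
        uint8_t  percent;  /* e.g. 80% of the memory space        */
        uint8_t  priority; /* priority information                */
    } v;
};
```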
  • the resource allocation policy herein may be built into the processor of the control device (refer to the embodiment shown in FIG. 3), or may be stored in the cache of the control device (refer to the embodiment shown in FIG. 4).
  • Step S103 The control device instructs a component of the computer to process the application request according to the amount of resources allocated to the application request.
  • Instructing the component of the computer to process the application request according to the amount of resources allocated to the application request may be done by sending both the amount of resources allocated to the application request and the application request to the component; it may also be done by sending the application request to the component and informing the component how to process the application request.
  • In the embodiment of the present invention, the control device determines the amount of resources allocated to the application request according to the label carried by the application request and the correspondence between the label and the amount of resources allocated to the application request, and then instructs the component of the computer to process the application request according to the amount of resources allocated to the application request. Therefore, different application requests can be allocated different amounts of resources, thereby improving the quality of service.
  • In addition, the processed application request may be forwarded to the control device on another component of the computer, and that control device processes it in a manner similar to steps S101 to S103. It should be noted that the application request forwarded to the control device on another component of the computer also carries the tag.
  • Another embodiment of the data processing method, as shown in FIG. 9, includes:
  • Step S201 Same as step S101.
  • Step S202 The control device obtains the resource allocation policy from a high-speed buffer (which may be referred to as a cache for short).
  • the control device may load the resource allocation policy into a buffer of its processor.
  • If the resource allocation policy is a control table as shown in Table 1, the control device may send a query instruction to the cache, where the query instruction includes the label; the cache finds the corresponding entry in the control table according to the label and returns the entry to the control device.
  • the control device loads the entry into a buffer of its processor. Specifically, the entry includes a correspondence between the label and an amount of resources allocated to the application request.
  • Step S203 The control device determines the amount of resources allocated to the application request according to the label and a pre-saved resource allocation policy.
  • Step S204 Same as step S103.
  • In the embodiment of the present invention, the control device determines the amount of resources allocated to the application request according to the label carried by the application request and the correspondence between the label and the amount of resources allocated to the application request, and then instructs the component of the computer to process the application request according to the amount of resources allocated to the application request. Therefore, different application requests can be allocated different amounts of resources, thereby improving the quality of service.
  • Another embodiment of the data processing method of the embodiment of the present invention is described below. As shown in FIG. 10, the method includes:
  • Step S301 Same as step S101.
  • Step S302 Same as step S102 and same as steps S202-S203.
  • Step S303 The control device selects, from the at least two queues and according to the amount of resources allocated to the application request, the queue corresponding to the application request, and saves the application request in the queue corresponding to the amount of resources allocated to the application request.
  • control device may include a second buffer, and the queue is saved in a second buffer (as shown in FIG. 3 or FIG. 4 or FIG. 5).
  • the second buffer includes at least two queues, where each queue corresponds to a certain range of resources, and each queue has different priorities.
  • Step S304 The data forwarder of the control device (for example, the data forwarder 600j in FIG. 3 to FIG. 5) obtains the application request from the queue corresponding to the application request, and forwards the application request to the component of the computer.
  • Step S305 The component of the computer obtains and executes the application request.
  • step S304 is an optional step, and the components of the computer may also obtain the application request directly from the corresponding queue.
  • the second buffer may include three queues, which are a high priority queue, a medium priority queue, and a low priority queue.
  • For example, the proportion of resources corresponding to the high-priority queue is 70%-80%; assuming that the amount of resources allocated to the application request is 76%, the application request is placed in the high-priority queue.
  • a high priority queue means that the processing order takes precedence or is processed faster.
  • the components of the computer may preferentially obtain the application request from the high priority queue and execute it.
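  • As a worked version of the example above (three queues, with the high-priority queue covering 70%-80% and the request allocated 76% of resources), the following self-contained snippet picks the queue whose range contains the allocated share. Only the high-priority range is given in the text; the medium and low ranges are assumptions.

```c
#include <stdio.h>
#include <stdint.h>

struct queue_range {
    const char *name;
    uint8_t lo_pct, hi_pct;  /* inclusive range of the resource share */
};

int main(void)
{
    /* The 70%-80% high-priority range comes from the text; the other
     * two ranges are illustrative assumptions. */
    const struct queue_range queues[] = {
        { "high",   70, 80 },
        { "medium", 40, 69 },
        { "low",     0, 39 },
    };
    const uint8_t alloc = 76;  /* the 76% allocation from the example */

    for (size_t i = 0; i < sizeof queues / sizeof queues[0]; i++) {
        if (alloc >= queues[i].lo_pct && alloc <= queues[i].hi_pct) {
            printf("request goes to the %s priority queue\n", queues[i].name);
            break;
        }
    }
    return 0;
}
```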
  • In practice, the control device may also consider other factors, such as the amount of resources currently being used by the application request, when deciding which queue to put the application request into; the various application scenarios are not described one by one in this application.
  • Each queue may contain multiple application requests to be processed, and the component of the computer may process the multiple pending application requests in each queue according to a first-in-first-out principle or other principles.
  • The control device may continue to take application requests from the queue in turn (when there are multiple application requests in the queue) and send them to the component of the computer for processing.
  • In the embodiment of the present invention, the control device determines the amount of resources allocated to the application request according to the label carried by the application request and the correspondence between the label and the amount of resources allocated to the application request, puts the application request into the corresponding queue according to the amount of resources allocated to the application request, and then takes the application request out of the corresponding queue and sends it to the data forwarder; the data forwarder forwards the application request to the component of the computer for execution, thereby improving the quality of service.
  • the components of the computer can feed back a message to the operating system to indicate that the application request has been processed.
  • the resource recovery message may be sent to the control device by using the node management software, where the resource recovery message is used to delete the resource allocation policy in the control device.
  • The processing flow of FIG. 8 to FIG. 10 is further illustrated below by taking an application request for video playback as an example.
  • Step 1 The user clicks on a video file on the local computer.
  • Step 2 The CPU generates a memory access request for reading the video file.
  • Step 3 The CPU tags the memory access request and sends the tagged memory access request to the control device in the memory.
  • Step 4 The control device on the memory determines the memory space allocated to the memory access request according to the label, and then places the memory access request into the corresponding queue according to the allocated memory space.
  • Step 5 The memory control device takes the memory access request from the queue and sends it to the memory to execute the memory access request.
  • Step 6 After loading the video file in memory, send a response to the CPU.
  • Step 7 The CPU obtains the video file from the memory, and sends a hardware acceleration request to the GPU, and the GPU is required to perform hardware decoding on the video file, where the hardware acceleration request carries the video file and the label.
  • Step 8 The control device on the GPU determines the amount of hardware acceleration resources allocated to the video file according to the label, and then places the hardware acceleration request into the corresponding queue according to the allocated hardware acceleration resource amount.
  • Step 9 The control device on the GPU takes the hardware acceleration request from the queue and sends it to the GPU.
  • Step 10 The GPU performs hardware acceleration processing on the video file according to the hardware acceleration request.
  • Step 11 After the GPU completes processing, if no further processing of the video file is required, the GPU may send an output request to the south bridge (for example, the I/O control network shown in FIG. 1 or FIG. 2), where the output request includes the hardware-accelerated video file and the label.
  • Step 12 The control device on the south bridge determines the bandwidth allocated to the video file according to the label, and then places the output request into the corresponding queue according to the allocated bandwidth.
  • Step 13 The south bridge takes the video file from the queue and sends it to the display.
  • Step 14 The display displays the video file.
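Purely as an illustration of steps 1 to 14 (the component names, label value, and resource shares below are example values, not taken from the claims), the same label travels with the work while each component's control device applies its own policy for that label:

    def play_video(label, policies):
        # policies: component name -> {label: resource amount allocated by that component}
        request = {"label": label, "payload": "video.mp4"}
        for component in ("memory", "gpu", "south_bridge"):
            allocated = policies[component][label]
            print(f"{component}: label={label!r}, allocated={allocated}, executing request")
        return request

    play_video("video_playback", {
        "memory": {"video_playback": "80% of memory space"},
        "gpu": {"video_playback": "70% of hardware acceleration resources"},
        "south_bridge": {"video_playback": "70% of output bandwidth"},
    })

Note that, as the description points out, the policies on different components need not allocate the same share to the same label.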
  • In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • For example, the division of units is merely a logical function division, and in actual implementation there may be other division manners; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, or an electrical, mechanical or other form of connection.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the embodiments of the present invention.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
  • Based on such an understanding, the technical solution of the present invention essentially, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, which includes a number of instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present invention.
  • The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Stored Programmes (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

A computer, a control device, and a data processing method. The computer includes a processing unit and a control device. The processing unit is configured to add a label to an application request and send the labeled application request to the control device. The control device is configured to receive the labeled application request and determine, according to the label and a pre-stored resource allocation policy, the amount of resources allocated to the application request, where the resource allocation policy includes a correspondence between the label and the amount of resources allocated to the application request; the control device is further configured to instruct a component of the computer to process the application request according to the amount of resources allocated to the application request. The computer, the control device, and the data processing method are used to improve the quality of service of application requests.

Description

计算机,控制设备和数据处理方法
本申请要求于2014年4月30日提交中国专利局、申请号为201410182148.1、发明名称为“计算机,控制设备和数据处理方法”的中国专利申请,以及2014年11月24日提交中国专利局、申请号为201410682375.0、发明名称为“计算机,控制设备和数据处理方法”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本发明涉及计算机领域,特别涉及一种计算机,控制设备和数据处理方法。
背景技术
为了提高计算机或者服务器的运行效率,多个应用程序可以在计算机内部实现资源共享。例如,多个应用程序可以同时向内存申请资源,由此提高内存资源的利用率。然而,多个应用程序在共享资源时会相互干扰,使得一些重要的应用程序得不到优先处理,从而影响了服务质量。
发明内容
本发明实施例提供了一种计算机,控制设备和数据处理方法,用以提高应用请求的服务质量。
本发明实施例第一方面提供了一种计算机,所述计算机包括处理单元和控制设备;
所述处理单元用于给应用请求增加标签,将增加标签后的应用请求发送给所述控制设备;
所述控制设备用于接收所述增加标签后的应用请求,根据所述标签以及 预先保存的资源分配策略确定分配给所述应用请求的资源量,所述资源分配策略包括所述标签与分配给所述应用请求的资源量之间的对应关系;还用于指示所述计算机的组成部件根据所述分配给所述应用请求的资源量,处理所述应用请求。
结合第一方面,在第一种可能的实施方式中,所述控制设备还包括缓冲区,其中,所述缓冲区包含至少两个队列,其中,每个队列对应一定范围的资源量,并且每个队列的优先级不同;
所述控制设备具体用于根据所述分配给所述应用请求的资源量,从所述至少两个队列中,选择所述应用请求对应的队列,并将所述应用请求保存在所述应用请求对应的队列中;
所述计算机的组成部件用于从所述应用请求对应的队列中获得并执行所述应用请求。
结合第一方面,在第二种可能的实施方式中,所述控制设备还包括处理器和高速缓存器,所述高速缓存器中存储有所述资源分配策略;
所述处理器还用于从所述高速缓存器中获取所述资源分配策略。
结合第一方面的第二种可能的实施方式,在第三种可能的实施方式中,所述资源分配策略包括控制表,所述控制表包括多个表项,所述多个表项中的一个表项包括所述标签与分配给所述应用请求的资源量之间的对应关系;
所述处理器具体用于向所述高速缓存器发送查询指令,所述查询指令中包括所述标签;
所述高速缓存器用于根据所述查询指令获得所述标签对应的表项,并将所述标签对应的表项发送给所述控制设备的处理器。
结合第一方面,或者第一方面的第一种至第一方面的第三种可能的实施方式,在本发明第四种可能的实施方式中,所述控制设备还包括编程接口,所述编程接口用于对所述资源分配策略进行修改。
结合第一方面,或者第一方面的第一种至第一方面的第四种可能的实施 方式,在第五种可能的实施方式中,所述计算机还包括存储器,所述存储器中存储有节点管理软件;
所述处理单元还用于通过所述节点管理软件定义所述资源分配策略;
所述控制设备还用于从所述节点管理软件获取所述资源分配策略,并将所述资源分配策略写入所述高速缓存器中。
结合第一方面的第五种可能的实施方式,在第六种可能的实施方式中,所述处理单元还包括标签寄存器;
所述处理单元还用于通过所述节点管理软件定义所述标签,并通过所述节点管理软件将所述标签写入所述标签寄存器;
所述处理单元还用于从所述标签寄存器中读取所述标签。
本发明实施例第二方面提供了一种控制设备,所述控制设备设置于的计算机的组成部件上;所述控制设备包括处理器;
所述处理器用于接收所述增加标签后的应用请求,根据所述标签以及预先保存的资源分配策略确定分配给所述应用请求的资源量,所述资源分配策略包括所述标签与分配给所述应用请求的资源量之间的对应关系;还用于指示所述计算机的组成部件根据所述分配给所述应用请求的资源量,处理所述应用请求。
结合第二方面,在第一种可能的实施方式中,所述控制设备还包括缓冲区,所述缓冲区包含至少两个队列,其中,每个队列对应一定范围的资源量,并且每个队列的优先级不同;
所述处理器具体用于根据所述分配给所述应用请求的资源量,从所述至少两个队列中,选择所述应用请求对应的队列,并将所述应用请求保存在所述应用请求对应的队列中;
所述计算机的组成部件用于从所述应用请求对应的队列中获得并执行所述应用请求。
结合第二方面,在第二种可能的实施方式中,所述控制设备还包括高速 缓存器,所述高速缓存器中存储有所述资源分配策略。
所述处理器还用于从所述高速缓存器中获取所述资源分配策略。
结合第二方面的第二种可能的实施方式,在第三种可能的实施方式中,所述资源分配策略包括控制表,所述控制表包括多个表项,所述多个表项中的一个表项包括所述标签与分配给所述应用请求的资源量之间的对应关系;
所述处理器具体用于向所述高速缓存器发送查询指令,所述查询指令中包括所述标签;
所述高速缓存器用于根据查询指令获得所述标签对应的表项,并将所述标签对应的表项发送给所述处理器。
结合第二方面,或者第二方面的第一种至第二方面的第三种可能的实施方式,在第四种可能的实施方式中,控制设备还包括编程接口,所述编程接口用于对所述资源分配策略进行修改。
结合第二方面,或者第二方面的第一种至第二方面的第四种可能的实施方式,在第五种可能的实施方式中,所述资源分配策略是由所述计算机通过节点管理软件定义并发送给所述控制设备的,其中,所述节点管理软件存储在所述计算机的存储器中。
本发明实施例第三方面提供了一种数据处理方法,所述方法应用于控制设备中,所述控制设备设置于计算机的组成部件中;所述方法包括:
所述控制设备接收携带标签的应用请求;
所述控制设备根据所述标签以及预先保存的资源分配策略确定分配给所述应用请求的资源量,所述资源分配策略包括所述标签与分配给所述应用请求的资源量之间的对应关系;
所述控制设备指示所述计算机的组成部件根据所述分配给所述应用请求的资源量,处理所述应用请求。
结合第三方面,在第一种可能的实施方式中,所述控制设备还包括缓冲区,其中,所述缓冲区包含至少两个队列,其中,每个队列对应一定范围的 资源量,并且每个队列的优先级不同;
所述控制设备指示所述计算机的组成部件根据所述分配给所述应用请求的资源量,处理所述应用请求包括:
所述控制设备根据所述分配给所述应用请求的资源量,从所述至少两个队列中,选择所述应用请求对应的队列,并将所述应用请求保存在所述应用请求对应的队列中,使得所述计算机的组成部件从所述应用请求对应的队列中获得并执行所述应用请求。
结合第三方面,在第二种可能的实施方式中,所述控制设备还包括处理器和高速缓存器,所述高速缓存器中存储有所述资源分配策略;
所述方法还包括:所述控制设备的处理器从所述高速缓存器中获取所述资源分配策略。
结合第三方面的第二种可能的实施方式,在第三种可能的实施方式中,所述资源分配策略包括控制表,所述控制表包括多个表项,所述多个表项中的一个表项包括所述标签与分配给所述应用请求的资源量之间的对应关系;
所述控制设备的处理器从所述高速缓存器中获取所述资源分配策略包括:所述控制设备的处理器向所述高速缓存器发送查询指令,所述查询指令中包括所述标签;
所述高速缓存器根据查询指令获得所述标签对应的表项,并将所述标签对应的表项发送给所述控制设备的处理器。
本发明实施例提供了一种计算机,所述计算机包括处理单元和控制设备,其中,所述处理单元给应用请求增加标签,将增加标签后的应用请求发送给所述控制设备,所述控制设备根据标签和预先保存的资源分配策略确定分配给所述应用请求的资源量,并且指示所述计算机的组成部件根据所述分配给所述应用请求的资源量,处理所述应用请求。这就使得所述计算机的组成部件在处理所述应用请求时可以按照分配给所述应用请求的资源量进行处理,在一定程度上避免了多个应用请求互相抢占资源量,提高了服务质量。
附图说明
为了更清楚地说明本发明实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动性的前提下,还可以根据这些附图获得其他的附图。
图1是本发明实施例的一种计算机的系统架构图;
图2a是本发明实施例的另一种计算机的系统架构图;
图2b是本发明实施例的一种控制设备的结构示意图;
图3是本发明实施例的另一种控制设备的结构示意图;
图4是本发明实施例的再一种控制设备的结构示意图;
图5是本发明实施例的又一种控制设备的结构示意图;
图6是本发明实施例的控制面网络的架构示意图;
图7是本发明实施例的节点管理软件的结构示意图;
图8是本发明实施例的一种数据处理方法的流程示意图;
图9是本发明实施例的另一种数据处理方法的流程示意图;
图10是本发明实施例的再一种数据处理方法的流程示意图。
具体实施方式
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本发明的一部分实施例,而不是全部实施例。基于本发明中的实施例,本领域普通技术人员在没有做出创 造性劳动的前提下所获得的所有其他实施例,都应属于本发明保护的范围。
本发明实施例提出了一种计算机,控制设备和数据处理方法。用于提高应用程序的服务质量。
图1为本发明实施例提供的计算机10的系统架构示意图,如图1所示,该计算机10包括多个处理单元11、多个控制设备66以及多个计算机的组成部件33(图1中简称为部件)。本发明实施例所称的计算机的组成部件是指可能被多个应用程序占用资源的计算机的组成部件。
其中,处理单元11是指一个中央处理器(Central Processing Unit,CPU)上拥有的同样功能的处理器核心的其中一个,用于执行读、写等各种操作命令。
所述计算机的组成部件33包括:片上高速互连网络、与片上高速互连网络直接连接的计算机的组成部件,例如缓存(又称cache)、内存、图形处理器(Graphic Processing Unit,GPU)、显存等,还可以包括I/O互连网络以及与I/O互连网络连接的I/O设备,例如磁盘(又称硬盘),网卡,显示器等。
片上高速互连网络是用于连接多个处理单元11的连接器,在所述片上高速互连网络还连接有缓存、内存、图形处理器、显存等。
对于缓存来说,其分配给应用程序的资源可以是缓存空间;对于内存来说,其分配给应用程序的资源可以是内存空间;对于图形处理器来说,其分配给应用程序的资源可以是硬件加速资源;对于显存来说,其分配给应用程序的资源可以是显存空间。
此外,片上高速互连网络上还可以连接有I/O互连网络(又称南桥)。
I/O互连网络是用于控制I/O设备的设备。
计算机的组成部件33还包括直接与I/O互连网络连接的I/O设备,例如磁盘(又称硬盘),网卡,显示器等。
举例来说,在一段时间内,计算机10可能会处理多个应用程序,并且,这些应用程序都需要占用计算机的组成部件(例如,内存)的资源。然而内 存的资源有限,就可能会使得某些重要的应用程序得不到及时处理,影响了服务质量。
因此,在本发明实施例中,在可能被多个应用程序申请或占用资源的计算机的组成部件上设置控制设备66。控制设备66用于根据应用程序的类型的不同给应用程序分配不同的资源量,以处理该应用程序。这里的可能被多个应用程序申请或占用资源的计算机的组成部件包括但不限于:片上高速互连网络、缓存、内存、图形处理器、显存和I/O互连网络。
需要说明的是,在本发明实施例中,可以只在多个计算机的组成部件的其中一个计算机的组成部件上设置控制设备66;也可以在多个计算机的组成部件上设置控制设备66;甚至可以在前面描述的所有计算机的组成部件上设置控制设备66。
为了让控制设备66能够识别出不同类型的应用程序,就需要在应用请求(应用程序对应的请求)产生的源端识别出应用请求的类型,并打上标签,使得后续向某个计算机的组成部件上的控制设备66发送该应用请求时,控制设备66可以根据标签对类型不同的应用程序做出不同的处理。需要说明的是,在本发明实施例中,应用程序和应用请求代表的含义相同。并且,本发明实施例中的应用请求包括计算机内部产生的各种指令,以及从计算机外部接收的各种指令,例如文件访问请求、视频播放请求、内存访问请求、IO请求、内部连接(Interconnect)请求等等。
这里的应用请求产生的源端可以是处理单元11,也可以是I/O设备(例如,网卡)。当应用请求来自计算机10本地时,应用请求产生的源端是可以处理单元11;当应用请求来自计算机10外部,例如接收利用互联网发送过来的应用请求时,应用请求产生的源端可能是网卡或者其他输入输出设备。
当应用请求来自计算机10内部时,打标签的方式可以是:
在处理单元11中设置标签寄存器77(如图2a所示),标签寄存器77中保存有寄存器值。当处理单元11生成应用请求时,处理单元11通过读取寄存器 值给所述应用请求增加标签,所述标签即寄存器值。
具体的,标签是由节点管理软件(后面会详细介绍)为应用请求定义的。节点管理软件可以是操作系统中的一个模块,也可以是操作系统与计算机硬件之间的中间软件层(Hypervisor)中的一个模块,运行在处理单元11上。节点管理软件在为一个应用请求定义标签之后,操作系统可以将该标签写入该应用请求对应进程的上下文中,然后再将应用请求对应进程的上下文写入寄存器中。
一种可选的实施方式是:
由于处理单元11本身可包含多个寄存器,因此可以将多个寄存器中的其中一个寄存器设置为标签寄存器77,该标签寄存器77中用来保存应用程序的标签。当处理单元11生成一个应用请求时,到标签寄存器77中读取寄存器值,将该寄存器值作为标签写入应用请求中。可以理解的是,标签的表现形式可以是应用程序的ID,字母,数字等等,在此不做限定。
另一种可选的实施方式是:
在处理单元11中添加一个新的寄存器,并且将所述新的寄存器定义为标签寄存器77,该标签寄存器77用来保存应用程序的标签。后面的处理方式与前面一种实施方式相同,这里不再赘述。
当网卡作为应用请求的源端时,一种实施方式是网卡本身并不执行给应用请求打标签的动作。举例来说,网卡接收到应用请求时,该应用请求就是带有标签的应用请求。也就是说,应用请求的发送端可以在发送应用请求之前,给应用请求打上标签。可以理解的是,在分布式系统中,各个服务器(或者计算机)之间可以通过协商确定应用请求的标签,也可以设置一个标签服务器,用于定义并且向各个服务器发送应用请求的标签。诸如此类的实施方式都在本发明实施例的保护范围以内。另一种实施方式是网卡接收到消息包时,通过对所述消息包进行解析获得应用请求,然后给所述应用请求打上标签。这种情况下,网卡给所述应用请求打标签的方式与前面描述的处理单元 11打标签的方式类似,这里不再赘述。
需要说明的是,计算机10内的其他组成部件也可以给应用请求增加标签,例如I/O互连网络等,其实施方式与处理单元11或者网卡类似,本发明实施例并不对打标签的组成部件做任何限定。
下面以处理单元11为应用请求增加标签为例来说明后续的处理流程。
当应用请求增加标签以后,处理单元11向控制设备66发送该应用请求时,会带上该应用请求的标签。
具体的,所述控制设备66用于接收所述增加标签后的应用请求,根据所述标签以及预先保存的资源分配策略确定分配给所述应用请求的资源量,所述资源分配策略包括所述标签与分配给所述应用请求的资源量之间的对应关系;还用于指示所述计算机的组成部件根据所述分配给所述应用请求的资源量,处理所述应用请求。
本发明实施例提供了一种计算机,所述计算机包括处理单元和控制设备,其中,所述处理单元给应用请求增加标签,将增加标签后的应用请求发送给所述控制设备,所述控制设备根据标签和预先保存的资源分配策略确定分配给所述应用请求的资源量,并且指示所述计算机的组成部件根据所述分配给所述应用请求的资源量,处理所述应用请求。这就使得所述计算机的组成部件在处理所述应用请求时可以按照分配给所述应用请求的资源量进行处理,在一定程度上避免了多个应用请求互相抢占资源量,提高了服务质量。
下面重点描述控制设备66的结构及功能。
控制设备66是指计算机10内的计算机的组成部件中任一组成部件上的设备。当某些计算机的组成部件本身包含有控制器(例如,内存包含内存控制器或者网卡包含网卡控制器)时,控制设备66可以是嵌入到控制器中的控制设备或者新增的与原有的控制器连接的控制设备;当某些计算机的组成部件本身不包含控制器时,控制设备66可以是新增的连接在所述计算机的组成部件上的控制器或者控制设备。
如图2b所示,控制设备66包括处理器600a;
所述控制设备66用于接收所述增加标签后的应用请求,根据所述标签以及预先保存的资源分配策略确定分配给所述应用请求的资源量,所述资源分配策略包括所述标签与分配给所述应用请求的资源量之间的对应关系;还用于指示所述计算机的组成部件根据所述分配给所述应用请求的资源量,处理所述应用请求。
此外,控制设备66还可以包括缓冲区(又称buffer)600b。
举例来说,所述处理器600a用于将所述增加标签后的应用请求保存在缓冲区600b中;从缓冲区600b中读取所述标签,根据所述标签以及预先保存的资源分配策略确定分配给所述应用请求的资源量,所述资源分配策略包括所述标签与分配给所述应用请求的资源量之间的对应关系;指示所述计算机的组成部件根据所述分配给所述应用请求的资源量,处理所述应用请求。
需要说明的是,缓冲区600b也可以是所述处理器600a中的寄存器。在这种情况下,其处理方式可以是:所述处理器600a用于将所述增加标签后的应用请求保存在其寄存器中;从寄存器中读取所述标签,根据所述标签以及预先保存的资源分配策略确定分配给所述应用请求的资源量,所述资源分配策略包括所述标签与分配给所述应用请求的资源量之间的对应关系;指示所述计算机的组成部件根据所述分配给所述应用请求的资源量,处理所述应用请求。
举例来说,如图3所示,一种可选的实施方式是:控制设备66可以包括处理器600i、缓冲区(又称buffer)600b、队列600c。
其中,处理器600a可以是现场可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程器件。在处理器600i中内置有资源分配策略,所述资源分配策略可以是控制表(如表1所示),并且,所述控制表是可编辑的。
Figure PCTCN2015072672-appb-000001
Figure PCTCN2015072672-appb-000002
表1
具体的,表1中的每一个表项对应一个标签。并且,控制表的每个表项中包含有多个“属性”,“属性”代表给所述标签对应的应用请求分配的资源量。资源量可以有多种,例如,可以包括目标服务质量或者IPC(Instruction per Cycle)或者响应时间或者最大容忍度等。“属性”的值区间可以由用户设定。举例来说,值区间的定义可以是不低于30%,或者不高于80%。此外,每个表项中还包含多个“状态”,“状态”代表所述标签对应的应用请求目前消耗的资源量,“状态”的值是可以实时监控并进行更新的。
另外,所述资源分配策略也可以是一段固件代码,内置于FPGA中。本发明实施例并不对保存资源分配策略的方式做任何限定。
缓冲区600b是一个临时缓存区,当控制设备66接收带有标签的应用请求时,首先会放入缓冲区600b进行临时保存。
队列600c也是一个临时缓存区,它可以和缓冲区600b位于同一个临时缓存区中,也可以和缓冲区600b分离,单独作为一个临时缓存区。队列600c用于保存经过处理器600i处理后的应用请求。队列600c中可以包含多个队列,不同的队列在缓冲区600b中对应着不同的地址段。不同队列的优先级不同,优先级体现在所述计算机的组成部件执行所述各个队列中的应用请求的顺序不同。这意味着,给不同队列分配的资源也不同。
此外,控制设备66还可以包括编程接口600d。
编程接口600d用于实现一种地址空间映射机制,能够将处理器600i中内置的控制表映射到计算机10的物理地址空间中去。节点管理软件可以访问所述计算机10的物理地址空间,对所述控制表进行编辑。例如,编程接口600d可以提供多种函数,用于对控制表中保存的表项进行增加、修改或删除。另外, 处理器600i还可以将其控制表中每个表项中的各个“状态”的值提供给节点管理软件,使得节点管理软件对各个应用请求的“状态”的值进行统计后,进而对资源分配策略进行调整。
举例来说,处理器600i可以从缓冲区600b中获取一条带有标签的应用请求,根据所述标签在表1中查询到对应的表项,从而获得所述应用请求的“属性”。由于所述应用请求的“属性”指示了分配给所述应用请求的资源量,因此处理器600i可以根据分配给所述应用请求的资源量,从所述至少两个队列中,选择所述应用请求对应的队列,将所述应用请求放入相应的队列中。
当处理器600i所述应用请求放入相应的队列中后,可以指示相应的计算机的组成部件对应用请求做出的处理。这里的“相应的计算机的组成部件”是指该控制设备所属的计算机的组成部件。例如,如果该控制设备是指缓存上的控制设备,那么这里的“相应的计算机的组成部件”就是指缓存。
这里的指示相应的计算机的组成部件对应用请求做出的处理,可以是处理器600i从所述相应的队列中取出所述应用请求,发送给相应的计算机的组成部件,也可以是相应的计算机的组成部件从相应的队列中获得所述应用请求。
另外,控制设备66还可以包括数据转发器600j,当处理器600i将不同的应用程序放入队列600c中的不同队列后,可以从队列中取出应用请求后,发送给所述数据转发器600j,所述数据转发器600j用于再将所述应用请求转发给所述相应的计算机的组成部件。也就是说,这里的相应的计算机的组成部件从相应的队列中获得所述应用请求可以是通过数据转发器600j获得。
另外,处理器600i在将应用请求放入相应的队列600c之前,可以对应用请求做一些预处理操作,例如压缩、加密等操作,再将经过预处理操作之后的应用请求放入相应的队列600c。
如图4所示,对于控制设备66,另一种可选的实施方式是:
所述控制设备66包括缓冲区(又称buffer)600b、队列600c、微处理器600e和高速存储器(又称cache)600f。
其中,缓冲区(又称buffer)600b、队列600c和图3所示的缓存区和队列相同,在此不再赘述。
微处理器600e可以是CPU等其他具有类似CPU功能的控制器。与图3所示的处理器600a不同之处在于,处理器600i作为一种可编程器件,其中内置有资源分配策略,并且,所述资源分配策略是可编辑的。而微处理器600e执行CPU的功能,但不能在其中内置控制表。因此,图4所示的控制设备66还包含有高速缓存器600f。在所述高速缓存器600f中存储有资源分配策略,所述资源分配策略是指类似于所述控制表功能的程序代码。
举例来说,当控制设备66接收一个带有标签的应用请求时,首先将该应用请求放入缓冲区600b中。微处理器600e可以从缓冲区600b保存的应用请求队列中获取带有标签的应用请求,并且从高速缓存器600f中读取资源分配策略到缓冲区600b中,根据所述标签以及资源分配策略确定给所述应用请求分配的资源量,从所述至少两个队列中,选择所述应用请求对应的队列,将所述应用请求放入相应的队列中。微处理器600e再从所述相应的队列中取出所述应用请求,发送给相应的计算机的组成部件。
或者,控制设备66还可以包括数据转发器600j,当微处理器600e将不同的应用程序放入队列600c中的不同队列后,可以从队列中取出应用请求后,发送给所述数据转发器600j,所述数据转发器600j用于再将所述应用请求转发给所述相应的计算机的组成部件。后面的处理方式与图3所示的实施方式相同,这里不再赘述。
另外,在图4所示的控制设备66中还可以包括控制逻辑(图4中未示出),用于对高速缓存器600f中保存的资源分配策略进行修改。
同样的,微处理器600e在将应用请求放入相应的队列600c之前,可以对应用请求做一些预处理操作,例如压缩、加密等操作,再将经过预处理操作之后的应用请求放入相应的队列600c。
可以理解的是,如果微处理器600e内部有buffer,也可以将带有标签的应 用请求以及高速缓存器600f中保存的资源分配策略读到自己的buffer中,在自己的buffer中处理所述应用请求,根据处理结果将所述微处理器600e放入相应的队列600c。
对于控制设备66,再一种可选的实施方式是:
如图5所示,控制设备66可以包括缓冲区600b,比较控制逻辑600g,高速存储器600f和队列600c。
这里所称的比较控制逻辑600g可以是专用集成电路(Application Specific Integrated Circuits,ASIC)或者其他集成电路。
缓冲区600b与前面描述的缓冲区一致。
高速存储器600f中存储有控制表(表1)。
当控制设备66接收一个带有标签的应用请求时,首先将该应用请求放入应用请求的队列。队列可以是缓冲区600b中缓存空间的一部分,也可以是一个独立的缓冲区。比较控制逻辑600g从队列中读取所述应用请求到缓冲区600b(或者比较控制逻辑600g的buffer)中,并且根据所述应用请求的标签向高速存储器600f发出读取指令,要求高速存储器600f返回所述标签对应的表项。该表项的内容将被加载到缓冲区600b(或者比较控制逻辑600g的buffer)中,比较控制逻辑600g在缓冲区600b(或者比较控制逻辑600g的buffer)中根据表项的内容,从所述至少两个队列中,选择所述应用请求对应的队列,从而将所述应用请求放入队列600c中。比较控制逻辑600g再从相应的队列中取出所述应用请求,发送给相应的计算机的组成部件。
或者,控制设备66还可以包括数据转发器600j,当比较控制逻辑600g将不同的应用程序放入队列600c中的不同队列后,可以从队列中取出应用请求后,发送给所述数据转发器600j,所述数据转发器600j用于再将所述应用请求转发给所述相应的计算机的组成部件。
同样的,比较控制逻辑600g在缓冲区600b中还可以对所述应用进行一些预处理操作,例如压缩、加密等。
另外,控制设备66还可以包括编程接口600d,用于对高速缓存器600f中保存的控制表进行编辑。其具体功能可参照图3所示的实施例中对编程接口600d的描述。
需要说明的是,计算机10内的各个组成部件上的控制设备66可能不完全相同。具体而言,各个控制设备66保存的资源分配策略可能不完全相同,例如,例如对于同一个应用请求,当它需要访问内存时,内存分配给它的资源量可能达到80%,而当它需要通过I/O设备输出时,I/O互连网络分配给它的资源量可能只有70%。
利用本发明实施例提供的控制设备,可以根据应用请求的标签和预先保存的资源分配策略确定分配给所述应用请求的资源量,并且指示所述计算机的组成部件根据所述分配给所述应用请求的资源量,处理所述应用请求。这就使得所述计算机的组成部件在处理所述应用请求时可以按照分配给所述应用请求的资源量进行处理,在一定程度上避免了多个应用请求互相抢占资源量,提高了服务质量。
在本发明实施例中,各个计算机的组成部件上的控制设备66上可以形成控制设备网络,用于将所有的控制设备66连接起来,如图1或图2虚线所示。其中,每个控制设备都包含一个物理接入点,通过物理接入点接入控制设备网络。所述控制设备网络可以通过外设部件互连(Peripheral Component Interconnect Express,PCIe)协议或者其他协议传输数据,在本发明实施例中不对协议的类型做限定。
本发明实施例中还可以包含节点管理软件(图1中未示出),所述节点管理软件可以是操作系统中的一个模块,也可以是操作系统与计算机硬件之间的中间软件层(Hypervisor)中的一个模块,运行在某一个或多个处理单元11上。节点管理软件用于通过控制设备网络对所有的控制设备66进行管理,例如对各个控制设备66进行初始化操作,收集各个控制设备66的“状态”值,根据收集的“状态”值确定或者调整资源分配策略,向各个控制设备66发送 资源分配策略等操作。
可选的,如图6所示,一种实施方式是在计算机10中建立一套专用网络,利用一个根路由器将所有的控制设备66连接起来,所有的控制设备66的物理接入点都与该专用网络连接。所述专用网络可以提供一套通信协议,该通信协议负责定义访问各个控制设备66的消息包格式。举例来说,该消息包可以包括但不限于,控制设备编号或者ID、控制设备命令(例如增加资源分配策略或者删除资源分配策略或者修改资源分配策略)以及控制设备命令参数。另外,为了和所述计算机10外部的设备或者与节点管理软件进行通信,消息包也可以经过PCIe协议或者其他协议封装后再进行传输。
可选的,另一种实施方式是,各个控制设备66利用地址空间映射方式与节点管理软件进行数据交互。具体的,各个控制设备66将其控制表或者资源分配策略映射到计算机10的物理地址空间中,节点管理软件可以对这些地址空间进行访问,实现对控制表或者资源分配策略的编辑。
如图7所示,节点管理软件可以包括控制设备驱动模块701、监控管理模块702和用户编程接口703。
其中,控制设备驱动模块701用于扫描识别新的计算机的组成部件的控制设备66,并对该控制设备66进行初始化;向该控制设备66发送资源分配策略。另外,控制设备驱动模块701还用于对资源分配策略进行增加、修改或者删除。
监控管理模块702用于存储收集到的各个控制设备66的“状态”的值,对收集到的各个控制设备66的“状态”的值进行关联分析,结合用户需求确定资源分配策略。
用户编程接口703用于提供应用程序编程接口(Application Programming Interface,API),其他软件或者应用程序可以通过API实现对控制设备66的编程。举例来说,API至少包括以下接口:初始化命令、增加资源分配策略命令、修改资源分配策略命令、删除资源分配策略命令等。
本发明实施例中如图2b或图3或图4或图5所示的控制设备只是适用本 发明实施例的其中一种示例,并不是对本发明应用的具体限定,例如还可以是特定集成电路,不管哪一种形式,其在计算机中,实现控制设备的功能。本发明实施例所述的计算机,可以是个人电脑,也可以是服务器,也可以是手机,还可以是掌上电脑,本发明对计算机的具体实现形式不做限定。本申请文件对其他系统实施例或应用场景不再一一阐述。
下面介绍本发明实施例通过在计算机内设置控制设备来实现数据处理的流程,本发明实施例中的数据处理方法可以在图2b或图3或图4或图5所示的控制设备中实施,如图8所示,包括:
步骤S101:所述控制设备接收携带标签的应用请求。
所述携带标签的应用请求可能是来自图1或图2a所示的处理单元11,也可能是来自网卡。当所述携带标签的应用请求来自处理单元11时,处理单元11在接收或者生成应用请求时,需要给所述应用请求增加标签。具体的,处理单元11给所述应用请求增加标签的方式可参考图2a所示的实施例,这里不再赘述。
当携带标签的应用请求来自网卡时,一种情况是网卡接收到应用请求时,该应用请求就是带有标签的应用请求。另一种情况是网卡接收到消息包时,通过对所述消息包进行解析获得应用请求,然后给所述应用请求打上标签。
另外,在步骤S101之前,操作系统或者Hypervisor可以通过节点管理软件对计算机10内的各个控制设备66进行初始化操作,使得各个控制设备处于工作状态。控制设备66执行初始化操作以后,节点管理软件将资源分配策略通过控制设备网络发送给各个控制设备66。
步骤S102:所述控制设备根据所述标签以及预先保存的资源分配策略确定分配给所述应用请求的资源量,所述资源分配策略包括所述标签与分配给所述应用请求的资源量之间的对应关系。
具体的,所述控制设备将所述携带标签的应用请求写入所述控制设备的第一缓冲区(图2b所示的缓冲区600b),从所述第一缓冲区读取所述标签。
由于所述资源分配策略是通过所述节点管理软件发送给控制设备66的,并且,处理单元11为所述应用请求增加的标签也是通过所述节点管理软件定义的,因此所述应用请求携带的标签可以和资源分配策略中的标签关联起来,所述控制设备可以根据所述标签以及预先保存的资源分配策略确定分配给所述应用请求的资源量。
这里的资源量可以是所述控制设备所在的计算机的组成部件分配给所述应用请求的资源数量,也可以是一个比例值,还可以包括优先权信息等。例如,当所述控制设备所在的计算机的组成部件是内存时,所述资源量可以是内存空间的大小,也可以是内存空间的比例值(例如80%),还可以是其他信息(参见表1中对于“属性”的描述),本发明实施例没有对资源量的表现形式做任何限定,只要体现所述组成部件处理所述应用请求的优先级或者速度都在本发明实施例的保护范围以内。
另外,这里的资源分配策略可以内置于所述控制设备的处理器中(参考图3所示的实施方式),也可以是保存在所述控制设备的高速缓存器中(参考图3所示的实施方式)。
步骤S103:所述控制设备指示所述计算机的组成部件根据所述分配给所述应用请求的资源量,处理所述应用请求。
所述指示所述计算机的组成部件根据所述分配给所述应用请求的资源量,处理所述应用请求可以是将所述分配给所述应用请求的资源量以及所述应用请求发送给所述组成部件,也可以是将应用请求发送给所述组成部件并知会所述组成部件处理所述应用请求的方式。
本发明实施例可以由控制设备根据应用请求携带的标签,以及标签与分配给所述应用请求的资源量之间的对应关系,确定分配给所述应用请求的资源量,再指示计算机的组成部件根据所述分配给所述应用请求的资源量处理所述应用请求。因此,可以给不同的应用请求分配不同的资源量,由此提高服务质量。
可选的,当步骤S103之后所述应用请求尚未执行完毕,还需要到另一个计算机的组成部件上申请资源并执行时,可以将处理后的应用请求转发给另一个计算机的组成部件上的控制设备处理,其处理方式与步骤S101-步骤S103类似。需要说明的是,转发给另一个计算机的组成部件上的控制设备的应用请求也携带有所述标签。
下面介绍本发明实施例数据处理方法的另一种实施方式,如图9所示,所述方法包括:
步骤S201:与步骤S101相同。
步骤S202:所述控制设备从高速缓存器(可以简称为缓存)中获得资源分配策略。
当所述资源分配策略是软件代码时,所述控制设备可以将所述资源分配策略加载到其处理器的buffer中。当所述资源分配策略是如表1所示的控制表时,所述控制设备可以向所述高速缓存器发送查询指令,所述查询指令包括所述标签,所述高速缓存器根据所述标签在控制表中查找对应的表项,将所述表项返回给所述控制设备。所述控制设备将所述表项加载到其处理器的buffer中。具体的,所述表项包括所述标签与分配给所述应用请求的资源量之间的对应关系。
步骤S203:所述控制设备根据所述标签以及预先保存的资源分配策略确定分配给所述应用请求的资源量。
步骤S204:与步骤S103相同。
本发明实施例可以由控制设备根据应用请求携带的标签,以及标签与分配给所述应用请求的资源量之间的对应关系,确定分配给所述应用请求的资源量,再指示计算机的组成部件根据所述分配给所述应用请求的资源量处理所述应用请求。因此,可以给不同的应用请求分配不同的资源量,由此提高服务质量。
下面介绍本发明实施例数据处理方法的另一种实施方式,如图10所示, 所述方法包括:
步骤S301:与步骤S101相同。
步骤S302:与步骤S102相同以及与步骤S202-S203相同。
步骤S303:所述控制设备根据所述分配给所述应用请求的资源量,从所述至少两个队列中,选择所述应用请求对应的队列,并将所述应用请求保存在所述分配给所述应用请求的资源量对应的队列中。
具体的,所述控制设备可以包括第二缓冲区,所述队列保存在第二缓冲区中(如图3或者图4或者图5所示的实施方式)。其中,所述第二缓冲区包含至少两个队列,其中,每个队列对应一定范围的资源量,并且每个队列的优先级不同。
步骤S304:所述控制设备的数据转发器(例如,图3-图5中的数据转发器600j)从所述应用请求对应的队列中获得所述应用请求,并转发给所述计算机的组成部件。
步骤S305:所述计算机的组成部件获得并且执行所述应用请求。
需要说明的是,步骤S304是一个可选的步骤,所述计算机的组成部件也可以直接从相应的队列中获得所述应用请求。
举例来说,所述第二缓冲区可以包括三个队列,分别是高优先级队列、中优先级队列和低优先级队列。其中,高优先级队列对应的资源量的比例值为70%-80%。假设所述应用请求分配到的资源量是76%,那么就将所述应用请求放入高优先级队列中。高优先级队列意味着处理顺序优先,或者处理的速度较快。所述计算机的组成部件可以优先从高优先级队列中获得所述应用请求,予以执行。可选的,所述控制设备还可以参考其他因素,例如所述应用请求当前已使用的资源量,来考虑将所述应用请求放入哪个队列。在本申请文件中不一一对应用场景进行描述了。
可以理解的是,每个队列中可能都包含有多个待处理的应用请求,对于每个队列里的多个待处理的应用请求,所述计算机的组成部件可以按照先进 先出的原则或者其他原则进行处理。
另外,所述控制设备在将所述应用请求放入相应的队列之后,也可以继续由所述控制设备依次从该队列中取出应用请求(当该队列中有多个应用请求时),再发送给所述计算机的组成部件处理该应用请求。
本发明实施例可以由控制设备根据应用请求携带的标签,以及标签与分配给所述应用请求的资源量之间的对应关系,确定分配给所述应用请求的资源量,根据所述分配给所述应用请求的资源量将所述应用请求放入相应的队列中,再从所述应用请求对应的队列中获得所述应用请求发送给数据转发器,由数据转发器将所述应用请求转发给计算机的组成部件执行,由此提高了服务质量。
可选的,在图8或者图9或者图10所示的数据处理方法的实施方式中,还可以包括如下步骤:
应用请求执行完毕之后,计算机的组成部件可以向操作系统反馈消息,以说明该应用请求已处理完毕。此时,可以通过节点管理软件向控制设备发送资源回收消息,所述资源回收消息用以删除控制设备中的资源分配策略。
下面以处理一个视频播放的应用请求为例来进一步说明图8-图10的处理流程。
步骤1:用户点击本地计算机中的一个视频文件。
步骤2:CPU生成一个内存访问请求,用以读取所述视频文件。
步骤3:CPU将所述内存访问请求打上标签,并且将带有标签的内存访问请求发送给内存上的控制设备。
步骤4:内存上的控制设备根据标签确定分配给所述内存访问请求的内存空间,进而根据分配的内存空间将所述内存访问请求放入相应的队列。
步骤5:内存上的控制设备从队列中取出所述内存访问请求,发送给内存,用以执行该内存访问请求。
步骤6:内存加载该视频文件之后,给CPU发送一个响应。
步骤7:CPU从内存中获得该视频文件,并且给GPU发送硬件加速请求,要求GPU对所述视频文件进行硬件解码,所述硬件加速请求携带有所述视频文件和标签。
步骤8:GPU上的控制设备根据标签确定分配给所述视频文件的硬件加速资源量,进而根据分配的硬件加速资源量将所述硬件加速请求放入相应的队列。
步骤9:GPU上的控制设备从队列中取出所述硬件加速请求,发送给GPU。
步骤10:GPU根据所述硬件加速请求对所述视频文件进行硬件加速处理。
步骤11:GPU处理完毕之后,如果不需要再对所述视频文件进行其他处理,可以向南桥(例如,图1或图2所示的I/O控制网络)发送输出请求,所述输出请求包括经过硬件加速处理的视频文件以及标签。
步骤12:南桥上的控制设备根据标签确定分配给所述视频文件的带宽,进而根据分配的带宽将所述输出请求放入相应的队列。
步骤13:南桥从队列中取出所述视频文件,发送给显示器。
步骤14:显示器显示所述视频文件。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、计算机软件或者二者的结合来实现,为了清楚地说明硬件和软件的可互换性,在上述说明中已经按照功能一般性地描述了各示例的组成及步骤。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本发明的范围。
所属领域的技术人员可以清楚地了解到,为了描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和 方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另外,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口、装置或单元的间接耦合或通信连接,也可以是电的,机械的或其它的形式连接。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本发明实施例方案的目的。
另外,在本发明各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以是两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分,或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本发明各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本发明的具体实施方式,但本发明的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本发明揭露的技术范围内,可轻易想到各种等效的修改或替换,这些修改或替换都应涵盖在本发明的保护范围 之内。因此,本发明的保护范围应以权利要求的保护范围为准。

Claims (16)

  1. 一种计算机,其特征在于,所述计算机包括处理单元和控制设备;
    所述处理单元用于给应用请求增加标签,将增加标签后的应用请求发送给所述控制设备;
    所述控制设备用于接收所述增加标签后的应用请求,根据所述标签以及预先保存的资源分配策略确定分配给所述应用请求的资源量,所述资源分配策略包括所述标签与分配给所述应用请求的资源量之间的对应关系;还用于指示与所述控制设备耦合的所述计算机的其他组成部件根据所述分配给所述应用请求的资源量,处理所述应用请求。
  2. 根据权利要求1所述的计算机,其特征在于,所述控制设备还包括缓冲区,其中,所述缓冲区包含至少两个队列,其中,每个队列对应一定范围的资源量,并且每个队列的优先级不同;
    所述控制设备具体用于根据所述分配给所述应用请求的资源量,从所述至少两个队列中,选择所述应用请求对应的队列,并将所述应用请求保存在所述应用请求对应的队列中;
    所述计算机的组成部件用于按照所述每个队列的优先级分别处理所述缓冲区中的队列。
  3. 根据权利要求1所述的计算机,其特征在于,所述控制设备具体包括处理器和高速缓存器,所述高速缓存器中存储有所述资源分配策略;
    所述处理器还用于从所述高速缓存器中获取所述资源分配策略;所述处理器具体用于接收所述增加标签后的应用请求,根据所述标签以及预先保存的资源分配策略确定分配给所述应用请求的资源量,所述资源分配策略包括所述标签与分配给所述应用请求的资源量之间的对应关系;以及指示与所述控制设备耦合的所述计算机的其他组成部件根据所述分配给所述应用请求的资源量,处理所述应用请求。
  4. 根据权利要求3所述的计算机,其特征在于,所述资源分配策略包括 控制表,所述控制表包括多个表项,所述多个表项中的一个表项包括所述标签与分配给所述应用请求的资源量之间的对应关系;
    所述处理器具体用于向所述高速缓存器发送查询指令,所述查询指令中包括所述标签;
    所述高速缓存器用于根据所述查询指令获得所述标签对应的表项,并将所述标签对应的表项发送给所述控制设备的处理器。
  5. 根据权利要求3-4中任一所述的计算机,其特征在于,所述计算机还包括存储器,所述存储器中存储有节点管理软件;
    所述处理单元还用于通过所述节点管理软件定义所述资源分配策略;
    所述处理器还用于从所述节点管理软件获取所述资源分配策略,并将所述资源分配策略写入所述高速缓存器中。
  6. 根据权利要求5所述的计算机,其特征在于,所述处理单元还包括标签寄存器;
    所述处理单元还用于通过所述节点管理软件定义所述标签,并通过所述节点管理软件将所述标签写入所述标签寄存器;
    所述处理单元还用于从所述标签寄存器中读取所述标签。
  7. 根据权利要求1-6任一所述的计算机,其特征在于,所述组成部件包括片上高速互连网络、缓存、内存、图形处理器、显存、输入输出I/O互连网络、硬盘、网卡和显示器中的任意一个。
  8. 一种控制设备,其特征在于,所述控制设备包括处理器;
    所述处理器用于接收所述增加标签后的应用请求,根据所述标签以及预先保存的资源分配策略确定分配给所述应用请求的资源量,所述资源分配策略包括所述标签与分配给所述应用请求的资源量之间的对应关系;还用于指示与所述控制设备耦合的所述计算机的其他组成部件根据所述分配给所述应用请求的资源量,处理所述应用请求。
  9. 根据权利要求8所述的控制设备,其特征在于,所述控制设备还包括 缓冲区,所述缓冲区包含至少两个队列,其中,每个队列对应一定范围的资源量,并且每个队列的优先级不同;
    所述处理器具体用于根据所述分配给所述应用请求的资源量,从所述至少两个队列中,选择所述应用请求对应的队列,并将所述应用请求保存在所述应用请求对应的队列中;
    所述组成部件用于按照所述每个队列的优先级分别处理所述缓冲区中的队列。
  10. 根据权利要求8所述的控制设备,其特征在于,所述控制设备还包括高速缓存器,所述高速缓存器中存储有所述资源分配策略。
    所述处理器还用于从所述高速缓存器中获取所述资源分配策略。
  11. 根据权利要求10所述的控制设备,其特征在于,所述资源分配策略包括控制表,所述控制表包括多个表项,所述多个表项中的一个表项包括所述标签与分配给所述应用请求的资源量之间的对应关系;
    所述处理器具体用于向所述高速缓存器发送查询指令,所述查询指令中包括所述标签;
    所述高速缓存器用于根据查询指令获得所述标签对应的表项,并将所述标签对应的表项发送给所述处理器。
  12. 根据权利要求10-11任一权利要求所述的控制设备,其特征在于,所述资源分配策略是由所述计算机通过节点管理软件定义的,其中,所述节点管理软件存储在所述计算机的存储器中;
    所述处理器还用于从所述节点管理软件获取所述资源分配策略。
  13. 根据权利要求8-12任一所述的计算机,其特征在于,所述组成部件包括所述组成部件包括片上高速互连网络、缓存、内存、图形处理器、显存、输入输出I/O互连网络、硬盘、网卡和显示器中的任意一个。
  14. 一种数据处理方法,其特征在于,包括:
    所述控制设备接收携带标签的应用请求;
    所述控制设备根据所述标签以及预先保存的资源分配策略确定分配给所述应用请求的资源量,所述资源分配策略包括所述标签与分配给所述应用请求的资源量之间的对应关系;
    所述控制设备指示与所述控制设备耦合的所述计算机的其他组成部件根据所述分配给所述应用请求的资源量,处理所述应用请求。
  15. 根据权利要求14所述的方法,其特征在于,所述控制设备还包括缓冲区,其中,所述缓冲区包含至少两个队列,其中,每个队列对应一定范围的资源量,并且每个队列的优先级不同;
    所述控制设备指示与所述控制设备耦合的所述计算机的其他组成部件根据所述分配给所述应用请求的资源量,处理所述应用请求包括:
    所述控制设备根据所述分配给所述应用请求的资源量,从所述至少两个队列中,选择所述应用请求对应的队列,并将所述应用请求保存在所述应用请求对应的队列中,使得所述组成部件按照所述每个队列的优先级分别处理所述缓冲区中的队列。
  16. 根据权利要求14所述的方法,其特征在于,所述控制设备还包括处理器和高速缓存器,所述高速缓存器中存储有所述资源分配策略;
    所述方法还包括:所述处理器从所述高速缓存器中获取所述资源分配策略;
    所述控制设备根据所述标签以及预先保存的资源分配策略确定分配给所述应用请求的资源量,所述资源分配策略包括所述标签与分配给所述应用请求的资源量之间的对应关系包括:
    所述处理器根据所述标签以及预先保存的资源分配策略确定分配给所述应用请求的资源量,所述资源分配策略包括所述标签与分配给所述应用请求的资源量之间的对应关系;
    所述控制设备指示与所述控制设备耦合的所述计算机的其他组成部件根据所述分配给所述应用请求的资源量,处理所述应用请求包括:
PCT/CN2015/072672 2014-04-30 2015-02-10 计算机,控制设备和数据处理方法 WO2015165298A1 (zh)

Priority Applications (11)

Application Number Priority Date Filing Date Title
CA2935114A CA2935114C (en) 2014-04-30 2015-02-10 Computer, control device, and data processing method
JP2016553382A JP6475256B2 (ja) 2014-04-30 2015-02-10 コンピュータ、制御デバイス及びデータ処理方法
SG11201605623PA SG11201605623PA (en) 2014-04-30 2015-02-10 Computer, control device, and data processing method
EP15785332.6A EP3076296A4 (en) 2014-04-30 2015-02-10 Computer, control device and data processing method
KR1020167019031A KR101784900B1 (ko) 2014-04-30 2015-02-10 컴퓨터, 제어 장치 그리고 데이터 처리 방법
MX2016011157A MX360278B (es) 2014-04-30 2015-02-10 Computadora, dispositivo de control y metodo de procesamiento de datos.
AU2015252673A AU2015252673B2 (en) 2014-04-30 2015-02-10 Computer, control device and data processing method
BR112016016326-5A BR112016016326B1 (pt) 2014-04-30 2015-02-10 Computador, dispositivo de controle, e método de processamento de dados
RU2016134457A RU2651219C2 (ru) 2014-04-30 2015-02-10 Компьютер, устройство управления и способ обработки данных
PH12016501374A PH12016501374A1 (en) 2014-04-30 2016-07-12 Computer, control device, and data processing method
US15/335,456 US10572309B2 (en) 2014-04-30 2016-10-27 Computer system, and method for processing multiple application programs

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201410182148 2014-04-30
CN201410182148.1 2014-04-30
CN201410682375.0 2014-11-24
CN201410682375.0A CN105094983B (zh) 2014-04-30 2014-11-24 计算机,控制设备和数据处理方法

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/335,456 Continuation US10572309B2 (en) 2014-04-30 2016-10-27 Computer system, and method for processing multiple application programs

Publications (1)

Publication Number Publication Date
WO2015165298A1 true WO2015165298A1 (zh) 2015-11-05

Family

ID=54358145

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/072672 WO2015165298A1 (zh) 2014-04-30 2015-02-10 计算机,控制设备和数据处理方法

Country Status (13)

Country Link
US (1) US10572309B2 (zh)
EP (1) EP3076296A4 (zh)
JP (1) JP6475256B2 (zh)
KR (1) KR101784900B1 (zh)
CN (2) CN111666148A (zh)
AU (1) AU2015252673B2 (zh)
BR (1) BR112016016326B1 (zh)
CA (1) CA2935114C (zh)
MX (1) MX360278B (zh)
PH (1) PH12016501374A1 (zh)
RU (1) RU2651219C2 (zh)
SG (1) SG11201605623PA (zh)
WO (1) WO2015165298A1 (zh)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106980463A (zh) * 2016-01-18 2017-07-25 中兴通讯股份有限公司 存储系统的服务质量控制方法和装置
US10169239B2 (en) 2016-07-20 2019-01-01 International Business Machines Corporation Managing a prefetch queue based on priority indications of prefetch requests
US10521350B2 (en) 2016-07-20 2019-12-31 International Business Machines Corporation Determining the effectiveness of prefetch instructions
US10452395B2 (en) 2016-07-20 2019-10-22 International Business Machines Corporation Instruction to query cache residency
US10621095B2 (en) * 2016-07-20 2020-04-14 International Business Machines Corporation Processing data based on cache residency
CN108123924B (zh) * 2016-11-30 2021-02-12 中兴通讯股份有限公司 一种资源管理方法及系统
US10936490B2 (en) * 2017-06-27 2021-03-02 Intel Corporation System and method for per-agent control and quality of service of shared resources in chip multiprocessor platforms
CN109582600B (zh) * 2017-09-25 2020-12-01 华为技术有限公司 一种数据处理方法及装置
CN109726005B (zh) * 2017-10-27 2023-02-28 伊姆西Ip控股有限责任公司 用于管理资源的方法、服务器系统和计算机可读介质
CN110968418A (zh) * 2018-09-30 2020-04-07 北京忆恒创源科技有限公司 基于信号-槽的大规模有约束并发任务的调度方法与装置
CN109542622A (zh) * 2018-11-21 2019-03-29 新华三技术有限公司 一种数据处理方法及装置
US10601740B1 (en) * 2019-04-03 2020-03-24 Progressive Casuality Insurance Company Chatbot artificial intelligence
PH12019050292A1 (en) * 2019-12-22 2021-11-08 Samsung Electronics Ltd Method for scaling gpu in the cloud
CN114244790B (zh) * 2022-02-24 2022-07-12 摩尔线程智能科技(北京)有限责任公司 PCIe设备与主机设备的通信方法、系统及设备
CN114979131B (zh) * 2022-04-07 2024-04-19 中国科学院深圳先进技术研究院 面向云计算的标签化冯诺依曼体系结构通信方法及装置
US20240028420A1 (en) * 2022-07-22 2024-01-25 Dell Products L.P. Context driven network slicing based migration of applications and their dependencies

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102289385A (zh) * 2010-06-16 2011-12-21 富士施乐株式会社 信息处理系统、管理设备、处理请求设备和信息处理方法
CN102334103A (zh) * 2009-02-25 2012-01-25 国际商业机器公司 具有对于多个虚拟服务器间的共享资源分配的软件控制的微处理器

Family Cites Families (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5940612A (en) * 1995-09-27 1999-08-17 International Business Machines Corporation System and method for queuing of tasks in a multiprocessing system
US5787469A (en) 1996-09-06 1998-07-28 Intel Corporation System and method for exclusively writing tag during write allocate requests
US6195724B1 (en) * 1998-11-16 2001-02-27 Infineon Technologies Ag Methods and apparatus for prioritization of access to external devices
US7058947B1 (en) * 2000-05-02 2006-06-06 Microsoft Corporation Resource manager architecture utilizing a policy manager
US6785756B2 (en) * 2001-05-10 2004-08-31 Oracle International Corporation Methods and systems for multi-policy resource scheduling
US7096471B2 (en) * 2001-06-01 2006-08-22 Texas Instruments Incorporated Apparatus for resource management in a real-time embedded system
US20030067874A1 (en) * 2001-10-10 2003-04-10 See Michael B. Central policy based traffic management
US7661130B2 (en) * 2003-04-12 2010-02-09 Cavium Networks, Inc. Apparatus and method for allocating resources within a security processing architecture using multiple queuing mechanisms
JP4071668B2 (ja) 2003-04-16 2008-04-02 富士通株式会社 システムの使用資源を調整する装置および方法
US20050016042A1 (en) * 2003-04-28 2005-01-27 Baratta Adam M. Motion picture memorabilia and method for promoting motion pictures using same
US7430741B2 (en) 2004-01-20 2008-09-30 International Business Machines Corporation Application-aware system that dynamically partitions and allocates resources on demand
US7797699B2 (en) * 2004-09-23 2010-09-14 Intel Corporation Method and apparatus for scheduling virtual machine access to shared resources
US7356631B2 (en) * 2005-01-21 2008-04-08 Himax Technologies, Inc. Apparatus and method for scheduling requests to source device in a memory access system
US7380038B2 (en) * 2005-02-04 2008-05-27 Microsoft Corporation Priority registers for biasing access to shared resources
CN100479374C (zh) 2005-04-25 2009-04-15 华为技术有限公司 网络通信中处理紧急业务的方法
GB0524008D0 (en) * 2005-11-25 2006-01-04 Ibm Method and system for controlling the processing of requests for web resources
JP4594877B2 (ja) * 2006-02-21 2010-12-08 株式会社日立製作所 計算機リソース割当管理方法および計算機リソース割当管理装置
CN101438256B (zh) * 2006-03-07 2011-12-21 索尼株式会社 信息处理设备、信息通信系统、信息处理方法
JP2007272868A (ja) 2006-03-07 2007-10-18 Sony Corp 情報処理装置、情報通信システム、および情報処理方法、並びにコンピュータ・プログラム
JP5061544B2 (ja) * 2006-09-05 2012-10-31 トヨタ自動車株式会社 燃料電池
CN100459581C (zh) 2006-09-21 2009-02-04 电子科技大学 一种用于实时混合业务环境的可变参数分组调度方法
US8458711B2 (en) * 2006-09-25 2013-06-04 Intel Corporation Quality of service implementation for platform resources
US8065682B2 (en) * 2007-02-27 2011-11-22 Microsoft Corporation Enforcing system resource usage limits on query requests based on grouping query requests into workgroups and assigning workload groups to resource pools
US8886918B2 (en) * 2007-11-28 2014-11-11 International Business Machines Corporation Dynamic instruction execution based on transaction priority tagging
US8396929B2 (en) * 2008-07-02 2013-03-12 Sap Portals Israel Ltd. Method and apparatus for distributed application context aware transaction processing
KR20120024848A (ko) * 2009-05-26 2012-03-14 노키아 코포레이션 미디어 세션의 전달 방법 및 장치
EP2388700A3 (en) 2010-05-18 2013-08-07 Kaspersky Lab Zao Systems and methods for policy-based program configuration
US8560897B2 (en) 2010-12-07 2013-10-15 International Business Machines Corporation Hard memory array failure recovery utilizing locking structure
CN102195882B (zh) * 2011-05-18 2016-04-06 深信服网络科技(深圳)有限公司 根据数据流应用类型选路的方法及装置
CN102958166B (zh) * 2011-08-29 2017-07-21 华为技术有限公司 一种资源分配方法及资源管理平台
JP5884595B2 (ja) * 2012-03-29 2016-03-15 富士通株式会社 メッセージ通信方法,メッセージ通信プログラムおよびコンピュータ
CN102739770B (zh) * 2012-04-18 2015-06-17 上海和辰信息技术有限公司 一种基于云计算的资源调度方法及系统

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102334103A (zh) * 2009-02-25 2012-01-25 国际商业机器公司 具有对于多个虚拟服务器间的共享资源分配的软件控制的微处理器
CN102289385A (zh) * 2010-06-16 2011-12-21 富士施乐株式会社 信息处理系统、管理设备、处理请求设备和信息处理方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3076296A4 *

Also Published As

Publication number Publication date
SG11201605623PA (en) 2016-08-30
CA2935114A1 (en) 2015-11-05
BR112016016326A2 (pt) 2017-08-08
US10572309B2 (en) 2020-02-25
KR101784900B1 (ko) 2017-10-12
PH12016501374B1 (en) 2016-08-15
PH12016501374A1 (en) 2016-08-15
CA2935114C (en) 2020-07-14
AU2015252673B2 (en) 2017-12-14
RU2016134457A (ru) 2018-03-01
MX2016011157A (es) 2016-12-09
AU2015252673A1 (en) 2016-07-28
CN105094983A (zh) 2015-11-25
US20170046202A1 (en) 2017-02-16
EP3076296A4 (en) 2017-01-25
RU2016134457A3 (zh) 2018-03-01
CN111666148A (zh) 2020-09-15
CN105094983B (zh) 2020-04-28
RU2651219C2 (ru) 2018-04-18
MX360278B (es) 2018-10-26
KR20160098438A (ko) 2016-08-18
JP6475256B2 (ja) 2019-02-27
EP3076296A1 (en) 2016-10-05
BR112016016326B1 (pt) 2023-04-11
JP2017513096A (ja) 2017-05-25

Similar Documents

Publication Publication Date Title
WO2015165298A1 (zh) 计算机,控制设备和数据处理方法
US9678918B2 (en) Data processing system and data processing method
CN107690622B (zh) 实现硬件加速处理的方法、设备和系统
US9467512B2 (en) Techniques for remote client access to a storage medium coupled with a server
WO2015078219A1 (zh) 一种信息缓存方法、装置和通信设备
WO2017049945A1 (zh) 加速器虚拟化的方法、装置及集中资源管理器
US20130262614A1 (en) Writing message to controller memory space
WO2020034729A1 (zh) 数据处理方法、相关设备及计算机存储介质
US9311044B2 (en) System and method for supporting efficient buffer usage with a single external memory interface
US20240348686A1 (en) Remote Data Access Method and Apparatus
US10838763B2 (en) Network interface device and host processing device
EP4440080A1 (en) Network node configuration and access request processing method and apparatus
US8898353B1 (en) System and method for supporting virtual host bus adaptor (VHBA) over infiniband (IB) using a single external memory interface
US9104637B2 (en) System and method for managing host bus adaptor (HBA) over infiniband (IB) using a single external memory interface
US20240356886A1 (en) Network Node Configuration Method and Apparatus, and Access Request Processing Method and Apparatus

Legal Events

Date Code Title Description
  • 121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 15785332; Country of ref document: EP; Kind code of ref document: A1)
  • ENP Entry into the national phase (Ref document number: 2935114; Country of ref document: CA)
  • REEP Request for entry into the european phase (Ref document number: 2015785332; Country of ref document: EP)
  • WWE Wipo information: entry into national phase (Ref document number: 2015785332; Country of ref document: EP)
  • WWE Wipo information: entry into national phase (Ref document number: 12016501374; Country of ref document: PH)
  • ENP Entry into the national phase (Ref document number: 20167019031; Country of ref document: KR; Kind code of ref document: A)
  • REG Reference to national code (Ref country code: BR; Ref legal event code: B01A; Ref document number: 112016016326; Country of ref document: BR)
  • ENP Entry into the national phase (Ref document number: 2015252673; Country of ref document: AU; Date of ref document: 20150210; Kind code of ref document: A)
  • ENP Entry into the national phase (Ref document number: 2016553382; Country of ref document: JP; Kind code of ref document: A)
  • ENP Entry into the national phase (Ref document number: 2016134457; Country of ref document: RU; Kind code of ref document: A)
  • WWE Wipo information: entry into national phase (Ref document number: MX/A/2016/011157; Country of ref document: MX)
  • NENP Non-entry into the national phase (Ref country code: DE)
  • ENP Entry into the national phase (Ref document number: 112016016326; Country of ref document: BR; Kind code of ref document: A2; Effective date: 20160714)