WO2015039582A1 - 一种虚拟资源分配方法及装置 - Google Patents

一种虚拟资源分配方法及装置 Download PDF

Info

Publication number
WO2015039582A1
WO2015039582A1 (application PCT/CN2014/086352, CN2014086352W)
Authority
WO
WIPO (PCT)
Prior art keywords
user-level thread
hardware resources
hardware
Prior art date
Application number
PCT/CN2014/086352
Other languages
English (en)
French (fr)
Inventor
唐士斌
唐志敏
宋风龙
叶笑春
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Publication of WO2015039582A1

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/461Saving or restoring of program or task context

Definitions

  • The present invention relates to computers, and in particular to a virtual resource allocation method and apparatus.
  • In the background art, the hardware state associated with the user-level thread is saved when the process context switches. This approach is widely used in hardware-implemented mechanisms such as transactional memory and deterministic replay, which need to distinguish between different user-level threads.
  • In deterministic replay, for example, memory access conflicts from different user-level threads need to be checked and recorded, so when a user-level thread is switched, the hardware resources associated with that thread need to be saved and updated.
  • In transactional memory, a transaction belongs to a single user-level thread. When a user-level thread is switched, that event must be detected and the state related to the thread's transaction must be saved; otherwise the atomicity of the transaction cannot be guaranteed.
  • The existing method performs the operation of saving the user-level-thread-related hardware resources when the process context switches.
  • In the 1-on-1 model of shared-memory multithreading, where user-level threads and lightweight processes correspond one to one, the existing method is feasible.
  • In the M-on-N model of shared-memory multithreading, however, there are multiple kinds of correspondences between user-level threads and lightweight processes. A user-level thread is implemented by a user library.
  • A lightweight process is a kernel-supported user-level thread that shares address space and process resources with other lightweight processes.
  • Lightweight processes are bound to kernel-mode threads, and user-level threads acquire processor resources by binding to lightweight processes. The kernel is unaware of user-level threads, so a user-level thread switch may occur in user space without the kernel knowing; saving the thread-related hardware resources only at process context switch therefore misses user-level thread switches and undermines the accuracy of transactional memory, deterministic replay and data race checking.
  • Embodiments of the present invention provide a virtual resource allocation method and apparatus, which can avoid missing user-level thread switches during process context switching and improve the accuracy of methods such as transactional memory, deterministic replay and data race checking.
  • an embodiment of the present invention provides a virtual resource allocation method, including: when a user-level thread is suspended, the virtual resource allocation device saves the hardware resource corresponding to the user-level thread in a control data block of the user-level thread; The virtual resource allocation device stores the hardware resources corresponding to the user-level thread in the control data block of the lightweight process corresponding to the user-level thread.
  • The method further includes: the virtual resource allocation apparatus reading the hardware resources corresponding to the user-level thread saved in the control data block of the user-level thread, and loading them into the hardware corresponding to those hardware resources;
  • the virtual resource allocation device reads the hardware resource corresponding to the user-level thread saved in the control data block of the lightweight process, and loads the hardware resource corresponding to the hardware resource.
  • Saving, by the virtual resource allocation apparatus, the hardware resources corresponding to the user-level thread in the control data block of the user-level thread specifically includes:
  • the virtual resource allocation device adds a first data structure to the control data block of the user-level thread; and saves the user-level thread corresponding hardware resource to the first data structure
  • the allocating device reads the hardware resource corresponding to the user-level thread saved in the control data block of the user-level thread, and loads the hardware resource into the hardware corresponding to the hardware resource, specifically: the virtual resource allocation device is in the The user-level thread corresponding hardware resource is read in a data structure, and loaded into the hardware resource corresponding hardware.
  • Saving, by the virtual resource allocation apparatus, the hardware resources corresponding to the user-level thread in the control data block of the lightweight process includes: adding, by the virtual resource allocation apparatus, a second data structure to the control data block of the lightweight process; and saving the hardware resources corresponding to the user-level thread into the second data structure;
  • the virtual resource allocation device reads the hardware resource corresponding to the user-level thread saved in the control data block of the lightweight process, and loads the hardware resource into the hardware corresponding to the hardware resource, specifically: the virtual resource allocation
  • the device reads the hardware resources corresponding to the user-level thread in the second data structure, and loads the hardware resources corresponding to the hardware resources.
  • the method further includes: when the user-level thread is suspended, the virtual resource allocation device locally reads hardware resources of the user-level thread.
  • the method further includes: when the lightweight process is suspended, the virtual resource allocation device locally reads the lightweight process binding Hardware resources for all user-level threads.
  • The hardware resources include: a scalar clock, a read set and a write set, or a vector clock corresponding to the user-level thread.
  • the embodiment of the present invention provides a virtual resource allocation apparatus, including: a first saving unit, configured to save, in a control data block of a user-level thread, a hardware resource corresponding to the user-level thread when the user-level thread is suspended ;
  • a second saving unit configured to save hardware resources corresponding to the user-level thread in a control data block of the lightweight process corresponding to the user-level thread.
  • The apparatus further includes: a first loading unit, configured to read the hardware resources corresponding to the user-level thread saved by the first saving unit in the control data block of the user-level thread, and load them into the hardware corresponding to those hardware resources;
  • a second loading unit configured to read a hardware resource corresponding to the user-level thread saved by the second saving unit in a control data block of the lightweight process, and load the hardware resource corresponding to the hardware resource .
  • the first saving unit includes:
  • a first adding subunit configured to add a first data structure to the control data block of the user level thread
  • a first saving subunit configured to save the hardware resources corresponding to the user-level thread into the first data structure added by the first adding subunit;
  • the first loading unit is configured to: read the hardware resources corresponding to the user-level thread in the first data structure added by the first added sub-unit, and Load into the appropriate hardware.
  • the second saving unit includes:
  • a second adding subunit configured to add a second data structure to the control data block of the lightweight process
  • the device further includes: a first reading unit, configured to: when the user-level thread is suspended, the virtual resource allocation device reads the local device The hardware resources of the user-level thread.
  • the device further includes: a second reading unit, configured to: when the lightweight process is suspended, the virtual resource allocation device reads locally Hardware resources of all user-level threads bound by the lightweight process
  • The hardware resources include: a scalar clock, a read set and a write set, or a vector clock corresponding to the user-level thread.
  • FIG. 1 is a flowchart of a virtual resource allocation method according to an embodiment of the present invention
  • FIG. 2 is a flowchart of a virtual resource allocation method according to another embodiment of the present invention
  • FIG. 3 is a schematic structural diagram of a virtual resource allocation apparatus according to an embodiment of the present invention;
  • FIG. 4 is a schematic structural diagram of a virtual resource allocation apparatus according to another embodiment of the present invention;
  • FIG. 5 is a schematic structural diagram of a virtual resource allocation apparatus according to another embodiment of the present invention;
  • FIG. 6 is a schematic structural diagram of a virtual resource allocation apparatus according to still another embodiment of the present invention.
  • Detailed description
  • FIG. 1 is a schematic flowchart diagram of a virtual resource allocation method according to an embodiment of the present invention.
  • The embodiment of the present invention is applicable to virtualizing hardware resources for user-level threads in an operating system. The method is typically performed by a virtual resource allocation apparatus, which is generally a computer or a functional unit or module in a computer. Referring specifically to FIG. 1, the following steps may be included:
  • Step 10 When the user-level thread is suspended, the virtual resource allocation device saves the hardware resources corresponding to the user-level thread in the control data block of the user-level thread.
  • Step 20 The virtual resource allocation device saves the hardware resource corresponding to the user-level thread in the control data block of the lightweight process corresponding to the user-level thread.
  • step 10 may also be performed before step 20 or simultaneously with step 20.
  • In step 10, saving the hardware resources corresponding to the user-level thread in the control data block of the user-level thread means saving them in user space; the hardware resources may include: a CPU (Central Processing Unit), I/O (Input/Output), files, scalar clocks, vector clocks, read sets, write sets, memory and instruction counts.
  • In step 20, saving the hardware resources corresponding to the user-level thread in the control data block of the lightweight process means saving them in kernel space.
  • Case 1 In user space, user-level threads hang.
  • the hardware resources corresponding to the user-level thread are saved in the control data block of the user-level thread, that is, step 10.
  • Case 2 User-level threads hang in kernel space with the hang of lightweight processes bound to user-level threads.
  • the kernel does not know the specific information of the user-level thread. However, after the lightweight process hangs, the user-level thread will also hang, and The binding relationship between the user-level thread and the lightweight process does not change. Therefore, we store the hardware resources corresponding to the user-level thread in the control data block of the lightweight process.
  • In this way, the hardware resources corresponding to the user-level thread are saved both in the control data block of the user-level thread and in the control data block of the lightweight process, that is, hardware resources are virtualized for the user-level thread in user space and kernel space at the same time. This makes it possible to identify user-level threads accurately and to virtualize hardware resources for them accurately, improving the accuracy of transactional memory, data race checking and deterministic replay.
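  • As a rough illustration of steps 10 and 20, the following C sketch (all type and function names are hypothetical, not from the patent) writes a snapshot of the thread's hardware state into both control data blocks:

    #include <stdint.h>

    /* Hypothetical snapshot of the per-thread hardware resources (a scalar
     * clock for deterministic replay, read/write sets for transactional
     * memory, or a vector clock for data race checking). */
    typedef struct {
        uint64_t scalar_clock;
        /* ... read set / write set / vector clock, depending on the application */
    } hw_state_t;

    typedef struct { hw_state_t saved_hw; /* first data structure  */ } user_tcb_t;
    typedef struct { hw_state_t saved_hw; /* second data structure */ } lwp_cb_t;

    /* Hypothetical helper: read the current state from the local processor. */
    extern hw_state_t read_hw_state(void);

    /* Step 10: the user-level thread is suspended in user space. */
    void on_user_thread_suspend(user_tcb_t *tcb)
    {
        tcb->saved_hw = read_hw_state();
    }

    /* Step 20: the bound lightweight process is suspended in kernel space. */
    void on_lwp_suspend(lwp_cb_t *lwp)
    {
        lwp->saved_hw = read_hw_state();
    }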
  • FIG. 2 is a flowchart of a virtual resource allocation method according to another embodiment of the present invention. The following steps:
  • Step 30 The virtual resource allocation device reads the hardware resources corresponding to the user-level threads saved in the control data block of the user-level thread, and loads the hardware resources into the hardware corresponding to the hardware resources.
  • Step 40 The virtual resource allocation device reads the hardware resources corresponding to the user-level threads saved in the control data block of the lightweight process, and loads them into the hardware corresponding to the hardware resources.
  • Restoring the hardware resources corresponding to the user-level thread is divided into two cases:
  • step 30 the hardware resources corresponding to the user-level threads saved in the control data block of the user-level thread are read and loaded into hardware corresponding to the hardware resources.
  • the lightweight process is rescheduled in kernel space, and the user-level thread bound to it is also resumed.
  • step 40 the hardware resources corresponding to the user-level threads saved in the control data block of the lightweight process are read and loaded into hardware corresponding to the hardware resources.
  • In this way, the hardware resources corresponding to the user-level thread are restored both from the control data block of the user-level thread and from the control data block of the lightweight process, that is, they are restored in user space and kernel space at the same time, which makes it possible to identify the user-level thread accurately and thus restore the corresponding hardware resources for it accurately.
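  • The restore path of steps 30 and 40 mirrors the save path. Continuing the same hypothetical sketch, write_hw_state() stands in for loading the saved values back into the corresponding hardware:

    /* Hypothetical helper: load a saved snapshot back into hardware. */
    extern void write_hw_state(const hw_state_t *s);

    /* Step 30: the user-level thread is rescheduled in user space. */
    void on_user_thread_resume(const user_tcb_t *tcb)
    {
        write_hw_state(&tcb->saved_hw);
    }

    /* Step 40: the lightweight process is rescheduled in kernel space,
     * and the user-level thread bound to it resumes as well. */
    void on_lwp_resume(const lwp_cb_t *lwp)
    {
        write_hw_state(&lwp->saved_hw);
    }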
  • On the basis of the above scheme, saving, by the virtual resource allocation apparatus, the hardware resources corresponding to the user-level thread in the control data block of the user-level thread may include the following steps:
  • Step 101 The virtual resource allocation device adds a first data structure to the control data block of the user-level thread.
  • Step 102 Save the user-level thread corresponding hardware resource to the first data structure.
  • the control data block of the user-level thread generally includes the following resources:
  • (1) Thread ID; (2) register state (instruction pointer PC and stack pointer SP); (3) stack; (4) signal mask; (5) priority; (6) thread-local storage (user-level thread private storage).
  • A first data structure, item (7), is added to the control data block of the user-level thread to hold the hardware resources related to the user-level thread. This newly added first data structure corresponds to different hardware resources depending on the application: in a transactional memory application, a read set and a write set; in a data race checking application, a vector clock; in a deterministic replay application, a scalar clock.
  • the virtual resource allocation device saves the user-level thread corresponding hardware resources to the first data structure to save the user-level thread corresponding hardware resources in the user space.
  • the solution saves the hardware resources corresponding to the user-level threads in the user space through the first data structure.
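  • For illustration only, a user-level thread control data block extended with the first data structure (7) might look like the following C sketch; all field types and names here are assumptions, not part of the patent:

    #include <signal.h>
    #include <stdint.h>

    /* (7) First data structure: its contents depend on the application. */
    typedef struct {
        union {
            struct { void *read_set; void *write_set; } tm;  /* transactional memory */
            uint64_t *vector_clock;                          /* data race checking   */
            uint64_t  scalar_clock;                          /* deterministic replay */
        } u;
    } first_data_structure;

    typedef struct {
        int       thread_id;     /* (1) thread ID                        */
        void     *pc, *sp;       /* (2) register state: PC and SP        */
        void     *stack;         /* (3) stack                            */
        sigset_t  signal_mask;   /* (4) signal mask                      */
        int       priority;      /* (5) priority                         */
        void     *tls;           /* (6) thread-local storage             */
        first_data_structure hw; /* (7) newly added first data structure */
    } user_thread_control_block;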
  • the virtual resource allocation device saves the hardware resources corresponding to the user-level thread in the control data block of the lightweight process, and may include the following steps:
  • Step 201 The virtual resource allocation device adds a second data structure to the control data block of the lightweight process.
  • Step 202 Save the user-level thread corresponding hardware resource to the second data structure.
  • the control data block of the lightweight process generally includes the following resources:
  • (1) LWP ID (lightweight process number); (2) register state (instruction pointer PC and stack pointer SP); (3) signal mask; (4) alternate signal stack and masks for alternate stack disable and onstack; (5) user and user+system virtual time alarms; (6) user time and system CPU usage; (7) profiling state; (8) scheduling class and priority; (9) the second data structure (new hardware resource).
  • the second data structure is added to the control data block of the lightweight process for storing hardware resources related to the user-level thread, and the second new data structure corresponds to different hardware resources according to different applications. For example, in a transactional memory application, a corresponding read set and a write set; in a data race check application, a corresponding vector clock; in a deterministic playback application, a scalar clock.
  • the virtual resource allocation device saves the user-level thread corresponding hardware resources to the second data structure to save the user-level thread corresponding hardware resources in the kernel space.
  • the scheme saves the hardware resources corresponding to the user-level threads in the kernel space through the second data structure.
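  • Similarly, continuing the illustrative sketch above, a lightweight-process control data block extended with the second data structure (9) might look like this; fields (4) to (8) are collapsed into a comment:

    /* (9) Second data structure: same application-dependent contents as the
     * first data structure, but kept in kernel space. */
    typedef struct {
        union {
            struct { void *read_set; void *write_set; } tm;  /* transactional memory */
            uint64_t *vector_clock;                          /* data race checking   */
            uint64_t  scalar_clock;                          /* deterministic replay */
        } u;
    } second_data_structure;

    typedef struct {
        int       lwp_id;         /* (1) lightweight process number (LWP ID) */
        void     *pc, *sp;        /* (2) register state: PC and SP           */
        sigset_t  signal_mask;    /* (3) signal mask                         */
        /* (4)-(8): alternate signal stack and masks, virtual time alarms,
         * user/system CPU usage, profiling state, scheduling class/priority */
        second_data_structure hw; /* (9) newly added second data structure   */
    } lwp_control_block;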
  • the solution further includes:
  • the virtual resource allocation device reads the hardware resources of the user-level thread locally.
  • Optionally, when the suspension of the user-level thread is caused by the suspension of the lightweight process, after the user-level thread is suspended and before step 201, the virtual resource allocation apparatus locally reads the hardware resources of all user-level threads bound to the lightweight process.
  • Correspondingly, when the user-level thread resumes execution, in user space the virtual resource allocation apparatus reads the hardware resources corresponding to the user-level thread saved in the control data block of the user-level thread and loads them into the hardware corresponding to those resources, which may include: the virtual resource allocation apparatus reading the hardware resources corresponding to the user-level thread from the first data structure and loading them into the hardware corresponding to those resources.
  • In kernel space, the virtual resource allocation apparatus reads the hardware resources corresponding to the user-level thread saved in the control data block of the lightweight process and loads them into the hardware corresponding to those resources, which may include: the virtual resource allocation apparatus reading the hardware resources corresponding to the user-level thread from the second data structure and loading them into the hardware corresponding to those resources.
  • In this scheme, the hardware resources corresponding to the user-level thread are read from the first data structure so as to restore them in user space, and are read from the second data structure so as to restore them in kernel space, so that hardware resources are restored for the user-level thread more accurately.
  • Specifically, in a deterministic replay application, when the user-level thread is suspended the following steps are performed. Step 1001: the virtual resource allocation apparatus locally reads the scalar clock of the user-level thread.
  • Optionally, when the suspension of the user-level thread is caused by the suspension of the lightweight process, step 1001 is specifically: the virtual resource allocation apparatus locally reads the scalar clocks of all user-level threads bound to the lightweight process.
  • Step 1002 Write a scalar clock of the user-level thread into the first data structure.
  • Step 1003 Write a scalar clock of the user-level thread bound to the lightweight process under the lightweight process to the second data structure.
  • Step 1002 saves the hardware resources corresponding to the user-level thread in user space; step 1003 saves them in kernel space. There is no chronological order between saving in user space and saving in kernel space.
  • In deterministic replay, the hardware resource that needs to be saved in order to maintain the temporal ordering is the scalar clock; therefore, when the user-level thread is suspended, the virtual resource allocation apparatus locally reads the scalar clock of the user-level thread.
  • The virtual resource allocation apparatus can read the scalar clock of the user-level thread from the scalar clock register of the local processor and then write the value into the first data structure, which can be recorded as Thread_ScalarClock (thread scalar clock), so as to save the hardware resources corresponding to the user-level thread in user space.
  • When the lightweight process is suspended, the virtual resource allocation apparatus can read the scalar clocks of all user-level threads under the lightweight process from the scalar clock registers of the local processor and then write the values into the second data structure, which can be recorded as LWP_ScalarClock (lightweight process scalar clock), so as to save the hardware resources corresponding to the user-level threads in kernel space.
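  • A minimal C sketch of steps 1001 to 1003 is given below. The Thread_ScalarClock and LWP_ScalarClock fields mirror the names used in the text; the register accessor, the per-LWP thread bound and the struct layouts are assumptions for illustration only:

    #include <stdint.h>

    #define MAX_THREADS_PER_LWP 16    /* illustrative bound, not from the patent */

    typedef struct { uint64_t Thread_ScalarClock; } thread_replay_state;                /* first data structure  */
    typedef struct { uint64_t LWP_ScalarClock[MAX_THREADS_PER_LWP]; } lwp_replay_state; /* second data structure */

    /* Hypothetical helper: read the scalar clock register of the local
     * processor for the given user-level thread. */
    extern uint64_t read_scalar_clock_register(int user_thread);

    /* Steps 1001-1002: the user-level thread is suspended in user space. */
    void save_scalar_clock_user(thread_replay_state *s, int user_thread)
    {
        s->Thread_ScalarClock = read_scalar_clock_register(user_thread);
    }

    /* Steps 1001 and 1003: the lightweight process is suspended, so the scalar
     * clocks of all n user-level threads bound to it are saved. */
    void save_scalar_clock_kernel(lwp_replay_state *s, const int *bound_threads, int n)
    {
        for (int i = 0; i < n; i++)
            s->LWP_ScalarClock[i] = read_scalar_clock_register(bound_threads[i]);
    }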
  • Correspondingly, when the user-level thread is rescheduled, the virtual resource allocation apparatus reads the hardware resources corresponding to the user-level thread from the first data structure and loads them into the corresponding hardware, which may include the following steps:
  • Step 1004 The virtual resource allocation device reads the scalar clock of the user-level thread in the first data structure.
  • Step 1005 Load the scalar clock of the user-level thread into the hardware corresponding to the scalar clock.
  • When the lightweight process is rescheduled, the virtual resource allocation apparatus reads the hardware resources corresponding to the user-level thread from the second data structure and loads them into the corresponding hardware, which may include the following steps:
  • Step 1006 The virtual resource allocation device reads the scalar clock of all user-level threads in the lightweight process in the second data structure.
  • Step 1007 Load the scalar clock of the user-level thread bound to the lightweight process under the lightweight process to the hardware corresponding to the scalar clock.
  • Steps 1004 and 1005 restore the hardware resources corresponding to the user-level thread in user space; steps 1006 and 1007 restore them in kernel space. There is no chronological order between restoring in user space and restoring in kernel space.
  • the virtual resource allocation device may read the scalar clock value Thread_ScalarClock of the user-level thread in the first data structure in the control data block of the user-level thread, and then Write this value to the local processor's scalar clock register, which is loaded into the appropriate hardware. To realize the recovery of hardware resources corresponding to user-level threads in user space.
  • When the lightweight process is rescheduled, the virtual resource allocation apparatus can read the scalar clock values LWP_ScalarClock of the user-level threads from the second data structure in the control data block of the lightweight process and then write the values into the scalar clock registers of the local processor, i.e. load them into the corresponding hardware, so as to restore the hardware resources corresponding to the user-level threads in kernel space.
  • In this scheme, by reading the scalar clock of the user-level thread and the scalar clocks of all user-level threads under the lightweight process from the first data structure and the second data structure respectively and loading them into the corresponding hardware, the hardware resources corresponding to the user-level thread can be restored accurately, so that the deterministic replay method is implemented accurately.
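  • The corresponding restore path, steps 1004 to 1007, can be sketched in the same hypothetical terms (reusing the types from the save sketch); write_scalar_clock_register() stands in for loading the value back into the hardware:

    /* Hypothetical helper: load a scalar clock value into the register used
     * by the given user-level thread. */
    extern void write_scalar_clock_register(int user_thread, uint64_t value);

    /* Steps 1004-1005: the user-level thread is rescheduled in user space. */
    void restore_scalar_clock_user(const thread_replay_state *s, int user_thread)
    {
        write_scalar_clock_register(user_thread, s->Thread_ScalarClock);
    }

    /* Steps 1006-1007: the lightweight process is rescheduled; each bound
     * user-level thread gets its scalar clock loaded back into hardware. */
    void restore_scalar_clock_kernel(const lwp_replay_state *s, const int *bound_threads, int n)
    {
        for (int i = 0; i < n; i++)
            write_scalar_clock_register(bound_threads[i], s->LWP_ScalarClock[i]);
    }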
  • Specifically, in a transactional memory application, when the user-level thread is suspended the following steps are performed. Step 2001: the virtual resource allocation apparatus locally reads the read set and the write set of the user-level thread.
  • Optionally, when the suspension of the user-level thread is caused by the suspension of the lightweight process, step 2001 is specifically: the virtual resource allocation apparatus locally reads the read sets and write sets of all user-level threads bound to the lightweight process.
  • Step 2002 Write a read set and a write set of the user-level thread into the first data structure.
  • Step 2003 Write a read set and a write set of the user-level thread bound to the lightweight process under the lightweight process to the second data structure.
  • Step 2002 implements saving the hardware resources corresponding to the user-level thread in the user space
  • Step 2003 implements storing the hardware resources corresponding to the user-level thread in the kernel space. There is no chronological order for saving in user space and saving in kernel space.
  • In transactional memory, to guarantee that a transaction executes atomically at the storage level, i.e. that no memory access from another user-level thread interacts with the memory accesses inside the transaction, the hardware resources that need to be saved are the read set and the write set; therefore, when the user-level thread is suspended, the virtual resource allocation apparatus locally reads the read set and the write set of the user-level thread.
  • The virtual resource allocation apparatus can read the read set and the write set of the user-level thread from the read/write set registers of the local processor and then write the values into the first data structure.
  • The read set and the write set in the first data structure can be recorded as Rset1 and Wset1 respectively, so as to save the hardware resources corresponding to the user-level thread in user space.
  • the virtual resource allocation device can read the read set and the write set of all user-level threads under the lightweight process in the read and write set registers of the local processor. The value is then written to the second data structure.
  • the read set and the write set of the second data structure can be recorded as Rset2 and Wset2, respectively, to save the corresponding hardware resources of the user-level thread in the kernel space.
  • Correspondingly, when the user-level thread is rescheduled, the virtual resource allocation apparatus reads the hardware resources corresponding to the user-level thread from the first data structure and loads them into the corresponding hardware, which may include the following steps:
  • Step 2004 The virtual resource allocation device reads the read set and the write set of the user-level thread in the first data structure.
  • Step 2005 Load the read set and the write set of the user-level thread into the corresponding hardware.
  • When the lightweight process is rescheduled, the virtual resource allocation apparatus reads the hardware resources corresponding to the user-level thread from the second data structure and loads them into the corresponding hardware, which may include the following steps:
  • Step 2006 The virtual resource allocation device reads, in the second data structure, a read set and a write set of all user-level threads in the lightweight process;
  • Step 2007 Load the read set and the write set of the user-level thread bound to the lightweight process under the lightweight process into the corresponding hardware.
  • Steps 2004 and Steps 2005 are implemented to restore the hardware resources corresponding to the user-level threads in the user space.
  • Steps 2006 and 2007 implement the recovery of the hardware resources corresponding to the user-level threads in the kernel space. There is no chronological order in user space recovery and kernel space recovery.
  • In transactional memory, when the user-level thread is rescheduled, the virtual resource allocation apparatus can read the read set and write set of the user-level thread, i.e. Rset1 and Wset1, from the first data structure in the control data block of the user-level thread and then write the values into the read/write set registers of the local processor, i.e. load them into the corresponding hardware, so as to restore the hardware resources corresponding to the user-level thread in user space.
  • When the lightweight process is rescheduled, the virtual resource allocation apparatus can read the read sets and write sets of the user-level threads, i.e. Rset2 and Wset2, from the second data structure in the control data block of the lightweight process and then write the values into the read/write set registers of the local processor, i.e. load them into the corresponding hardware, so as to restore the hardware resources corresponding to the user-level threads in kernel space.
  • In this scheme, by reading the read set and write set of the user-level thread and the read sets and write sets of all user-level threads under the lightweight process from the first data structure and the second data structure respectively and loading them into the corresponding hardware, the hardware resources corresponding to the user-level thread can be restored accurately, so that the transactional memory method is implemented accurately.
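  • The transactional memory case, steps 2001 to 2007, follows the same save/restore pattern with the Rset1/Wset1 and Rset2/Wset2 names from the text. The access_set layout and the register accessors in this sketch are assumptions:

    #include <stdint.h>

    #define MAX_THREADS_PER_LWP 16                                    /* illustrative bound */

    typedef struct { uint64_t line_addr[64]; int count; } access_set; /* simplified read/write set */

    typedef struct { access_set Rset1, Wset1; } thread_tm_state;      /* first data structure  */
    typedef struct {
        access_set Rset2[MAX_THREADS_PER_LWP];
        access_set Wset2[MAX_THREADS_PER_LWP];
    } lwp_tm_state;                                                   /* second data structure */

    /* Hypothetical accessors for the processor's read/write set registers. */
    extern void read_rw_set_registers(int user_thread, access_set *r, access_set *w);
    extern void write_rw_set_registers(int user_thread, const access_set *r, const access_set *w);

    /* Steps 2001-2002 (save, user space). */
    void save_tm_sets_user(thread_tm_state *s, int user_thread)
    {
        read_rw_set_registers(user_thread, &s->Rset1, &s->Wset1);
    }

    /* Steps 2001 and 2003 (save, kernel space, all n bound threads). */
    void save_tm_sets_kernel(lwp_tm_state *s, const int *bound_threads, int n)
    {
        for (int i = 0; i < n; i++)
            read_rw_set_registers(bound_threads[i], &s->Rset2[i], &s->Wset2[i]);
    }

    /* Steps 2004-2005 (restore, user space). */
    void restore_tm_sets_user(const thread_tm_state *s, int user_thread)
    {
        write_rw_set_registers(user_thread, &s->Rset1, &s->Wset1);
    }

    /* Steps 2006-2007 (restore, kernel space). */
    void restore_tm_sets_kernel(const lwp_tm_state *s, const int *bound_threads, int n)
    {
        for (int i = 0; i < n; i++)
            write_rw_set_registers(bound_threads[i], &s->Rset2[i], &s->Wset2[i]);
    }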
  • Specifically, in a data race checking application, when the user-level thread is suspended the following steps are performed. Step 3001: the virtual resource allocation apparatus locally reads the vector clock of the user-level thread.
  • Optionally, when the suspension of the user-level thread is caused by the suspension of the lightweight process, step 3001 is specifically: the virtual resource allocation apparatus locally reads the vector clocks of all user-level threads bound to the lightweight process.
  • Step 3002 Write a vector clock of the user-level thread into the first data structure.
  • Step 3003 Write a vector clock of the user-level thread bound to the lightweight process under the lightweight process to the second data structure.
  • Step 3002 is implemented to save a hardware resource corresponding to the user level thread in the user space;
  • Step 3003 is configured to save the hardware resource corresponding to the user level thread in the kernel space. There is no chronological order for saving in user space and saving in kernel space.
  • In data race checking, the hardware resource that needs to be saved is the vector clock; therefore, when the user-level thread is suspended, the virtual resource allocation apparatus locally reads the vector clock of the user-level thread.
  • The virtual resource allocation apparatus can read the vector clock of the user-level thread from the vector clock register of the local processor and then write the value into the first data structure, which can be recorded as Thread_VectorClock (thread vector clock), so as to save the hardware resources corresponding to the user-level thread in user space.
  • When the lightweight process is suspended, the virtual resource allocation apparatus can read the vector clocks of all user-level threads under the lightweight process from the vector clock registers of the local processor and then write the values into the second data structure, which can be recorded as LWP_VectorClock (lightweight process vector clock), so as to save the hardware resources corresponding to the user-level threads in kernel space.
  • In this scheme, by locally reading the vector clock of the user-level thread and the vector clocks of all user-level threads under the lightweight process and writing them into the first data structure and the second data structure respectively, the hardware resources corresponding to the user-level thread can be saved accurately, so that the data race checking method is implemented accurately.
  • Correspondingly, when the user-level thread is rescheduled, the virtual resource allocation apparatus reads the hardware resources corresponding to the user-level thread from the first data structure and loads them into the corresponding hardware, which may include the following steps:
  • Step 3004 When the user-level thread re-schedules, the virtual resource allocation device reads the vector clock of the user-level thread in the first data structure.
  • Step 3005 Load the vector clock of the user-level thread into the corresponding hardware.
  • When the lightweight process is rescheduled, the virtual resource allocation apparatus reads the hardware resources corresponding to the user-level thread from the second data structure and loads them into the corresponding hardware, which may include the following steps:
  • Step 3006 When the lightweight process is rescheduled, the virtual resource allocation apparatus reads, in the second data structure, a vector clock of all user-level threads in the lightweight process;
  • Step 3007 Load a vector clock of the user-level thread bound to the lightweight process under the lightweight process into the corresponding hardware.
  • Steps 3004 and 3005 restore the hardware resources corresponding to the user-level thread in user space; steps 3006 and 3007 restore them in kernel space. There is no chronological order between restoring in user space and restoring in kernel space.
  • In data race checking, when the user-level thread is rescheduled, in addition to the prior-art operations, the virtual resource allocation apparatus can read the vector clock value Thread_VectorClock of the user-level thread from the first data structure in the control data block of the user-level thread and then write the value into the vector clock register of the local processor, i.e. load it into the corresponding hardware, so as to restore the hardware resources corresponding to the user-level thread in user space.
  • When the lightweight process is rescheduled, the virtual resource allocation apparatus can read the vector clock values LWP_VectorClock of the user-level threads from the second data structure in the control data block of the lightweight process and then write the values into the vector clock registers of the local processor, i.e. load them into the corresponding hardware, so as to restore the hardware resources corresponding to the user-level threads in kernel space.
  • In this scheme, by reading the vector clock of the user-level thread and the vector clocks of all user-level threads under the lightweight process from the first data structure and the second data structure respectively and loading them into the corresponding hardware, the hardware resources corresponding to the user-level thread can be restored accurately, so that the data race checking method is implemented accurately.
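  • Finally, the data race checking case, steps 3001 to 3007, can be sketched the same way with Thread_VectorClock and LWP_VectorClock; the vector clock length and the register accessors are assumptions:

    #include <stdint.h>

    #define VC_LEN              64    /* illustrative vector clock length */
    #define MAX_THREADS_PER_LWP 16

    typedef struct { uint64_t Thread_VectorClock[VC_LEN]; } thread_race_state;                /* first data structure  */
    typedef struct { uint64_t LWP_VectorClock[MAX_THREADS_PER_LWP][VC_LEN]; } lwp_race_state; /* second data structure */

    /* Hypothetical accessors for the processor's vector clock register. */
    extern void read_vector_clock_register(int user_thread, uint64_t out[VC_LEN]);
    extern void write_vector_clock_register(int user_thread, const uint64_t in[VC_LEN]);

    /* Steps 3001-3002 (save, user space). */
    void save_vector_clock_user(thread_race_state *s, int user_thread)
    {
        read_vector_clock_register(user_thread, s->Thread_VectorClock);
    }

    /* Steps 3001 and 3003 (save, kernel space, all n bound threads). */
    void save_vector_clock_kernel(lwp_race_state *s, const int *bound_threads, int n)
    {
        for (int i = 0; i < n; i++)
            read_vector_clock_register(bound_threads[i], s->LWP_VectorClock[i]);
    }

    /* Steps 3004-3005 (restore, user space). */
    void restore_vector_clock_user(const thread_race_state *s, int user_thread)
    {
        write_vector_clock_register(user_thread, s->Thread_VectorClock);
    }

    /* Steps 3006-3007 (restore, kernel space). */
    void restore_vector_clock_kernel(const lwp_race_state *s, const int *bound_threads, int n)
    {
        for (int i = 0; i < n; i++)
            write_vector_clock_register(bound_threads[i], s->LWP_VectorClock[i]);
    }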
  • FIG. 3 is a schematic structural diagram of a virtual resource allocation apparatus according to an embodiment of the present invention.
  • the virtual resource allocation apparatus is configured to implement the virtual resource allocation method of the foregoing embodiment.
  • the virtual resource allocation apparatus includes: a first saving unit 11 and a second saving unit 21.
  • the first saving unit 11 is configured to save the hardware resources corresponding to the user-level thread in the control data block of the user-level thread when the user-level thread is suspended.
  • the second saving unit 21 is configured to save, in the control data block of the lightweight process corresponding to the user-level thread, a hardware resource corresponding to the user-level thread.
  • the apparatus may further preferably include: a first loading unit 31 and a second loading unit 41.
  • FIG. 4 is a schematic structural diagram of a virtual resource allocation apparatus according to another embodiment of the present invention.
  • the first loading unit 31 is configured to read the hardware resources corresponding to the user-level threads saved by the first saving unit 11 in the control data block of the user-level thread, and load the hardware resources into the hardware corresponding to the hardware resources.
  • The second loading unit 41 is configured to read the hardware resources corresponding to the user-level thread saved by the second saving unit 21 in the control data block of the lightweight process, and load them into the hardware corresponding to those hardware resources.
  • the first saving unit 11 may include: a first adding subunit 111 and a first saving subunit 112.
  • The first adding subunit 111 is configured to add a first data structure to the control data block of the user-level thread;
  • the first saving subunit 112 is configured to save the hardware resources corresponding to the user-level thread into the first data structure added by the first adding subunit 111;
  • the first loading unit 31 is specifically configured to:
  • the user-level thread corresponding hardware resource is read in the first data structure added by the first adding sub-unit 111, and loaded into the corresponding hardware.
  • the second saving unit 21 may include: a second adding subunit 211 and a second saving subunit 212.
  • a second adding subunit 211, configured to add a second data structure to the control data block of the lightweight process;
  • a second saving subunit 212, configured to save the hardware resources corresponding to the user-level thread into the second data structure added by the second adding subunit 211.
  • the second loading unit 41 is specifically configured to:
  • the user-level thread corresponding hardware resource is read in the second data structure added by the second added sub-unit 211, and loaded into the corresponding hardware.
  • In this scheme, when a user-level thread is suspended, the hardware resources corresponding to the user-level thread are saved in the control data block of the user-level thread by the first saving unit and also in the control data block of the lightweight process by the second saving unit, that is, hardware resources are virtualized for the user-level thread in user space and kernel space at the same time. This makes it possible to identify user-level threads accurately and to virtualize hardware resources for them accurately, improving the accuracy of methods such as transactional memory, data race checking and deterministic replay.
  • FIG. 5 is a schematic structural diagram of a virtual resource allocation apparatus according to another embodiment of the present invention.
  • the virtual resource allocation apparatus further includes: a first reading unit 51a, configured to locally read hardware resources of the user-level thread when the user-level thread is suspended.
  • the second reading unit 51b is configured to locally read hardware resources of all user-level threads bound by the lightweight process when the lightweight process hangs.
  • the hardware resources include: a scalar clock, a read set, and a write set or a vector clock corresponding to the user-level thread.
  • FIG. 6 is a schematic structural diagram of a virtual resource allocation apparatus according to still another embodiment of the present invention.
  • The virtual resource allocation apparatus is used to implement the virtual resource allocation method provided by the foregoing method embodiments.
  • the virtual resource allocation device may be a functional entity on a computer or a computer, and includes at least one processor 61, a memory 62, and a bus 63.
  • the bus 63 is used to implement connection and communication between the processor 61 and the memory 62, and the memory 62 is used to store program codes and data executed by the processor 61.
  • The bus 63 may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like, which is not limited here.
  • The bus 63 can be divided into an address bus, a data bus, a control bus and so on. For ease of representation, only one thick line is shown in FIG. 6, but this does not mean that there is only one bus or one type of bus.
  • the memory 62 is used to store data or executable program code, where the program code includes computer operation instructions, which may specifically be: an operating system, an application, or the like.
  • the memory 62 may include a high speed RAM memory, and may also include a non-volatile memory, for example, at least one disk memory.
  • The processor 61 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention.
  • the processor 61 is configured to implement the virtual resource allocation method in the foregoing embodiment by executing the program code in the memory 62, and specifically includes:
  • the hardware resources corresponding to the user-level thread are saved in the control data block of the user-level thread;
  • the hardware resources corresponding to the user-level thread are saved in the control data block of the lightweight process corresponding to the user-level thread.
  • In this scheme, the hardware resources corresponding to the user-level thread are saved both in the control data block of the user-level thread and in the control data block of the lightweight process, that is, hardware resources are virtualized for the user-level thread in user space and kernel space at the same time, which makes it possible to identify user-level threads accurately and to virtualize hardware resources for them accurately, improving the accuracy of transactional memory, data race checking and deterministic replay.
  • the processor 61 is also used to:
  • In this way, the hardware resources corresponding to the user-level thread are restored both from the control data block of the user-level thread and from the control data block of the lightweight process, that is, they are restored in user space and kernel space at the same time, which makes it possible to identify the user-level thread accurately and thus restore the corresponding hardware resources for it accurately.
  • the processor 61 is specifically configured to: Adding a first data structure to the control data block of the user-level thread; saving the corresponding hardware resource of the user-level thread to the first data structure;
  • the hardware resources corresponding to the user-level thread are read from the first data structure and loaded into the hardware corresponding to those resources.
  • processor 61 is specifically configured to:
  • the hardware resources corresponding to the user-level threads are read in the second data structure and loaded into hardware corresponding to the hardware resources.
  • In this scheme, the hardware resources corresponding to the user-level thread are read from the first data structure so as to restore them in user space, and are read from the second data structure so as to restore them in kernel space, so that hardware resources are restored for the user-level thread more accurately.
  • Further, the processor 61 is further configured to: when the user-level thread is suspended, locally read the hardware resources of the user-level thread;
  • or, when the lightweight process is suspended, locally read the hardware resources of all user-level threads bound to the lightweight process.
  • The hardware resources include: a scalar clock, a read set and a write set, or a vector clock corresponding to the user-level thread.
  • In this scheme, by locally reading the vector clock of the user-level thread and the vector clocks of all user-level threads under the lightweight process and writing them into the first data structure and the second data structure respectively, the hardware resources corresponding to the user-level thread can be saved accurately, so that the data race checking method is implemented accurately.
  • Computer-readable media include both computer storage media and communication media, where communication media include any medium that facilitates transfer of a computer program from one place to another.
  • A storage medium may be any available medium that can be accessed by a computer. By way of example and not limitation, computer-readable media can include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
  • In addition, any connection may properly be termed a computer-readable medium. For example, if the software is transmitted from a website, server or other remote source using coaxial cable, fiber-optic cable, twisted pair, digital subscriber line (DSL) or wireless technologies such as infrared, radio and microwave, then the coaxial cable, fiber-optic cable, twisted pair, DSL or wireless technologies such as infrared, radio and microwave are included in the definition of the medium.
  • Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

Embodiments of the present invention disclose a virtual resource allocation method and apparatus, applied in the computer field, which can avoid missing user-level thread switches during process context switching. The method includes: when a user-level thread is suspended, a virtual resource allocation apparatus saves the hardware resources corresponding to the user-level thread in the control data block of the user-level thread; and the virtual resource allocation apparatus saves the hardware resources corresponding to the user-level thread in the control data block of the lightweight process corresponding to the user-level thread. The embodiments of the present invention are applied to virtual resource allocation.

Description

Virtual Resource Allocation Method and Apparatus
This application claims priority to Chinese Patent Application No. 201310444885.X, filed with the Chinese Patent Office on September 22, 2013 and entitled "Virtual Resource Allocation Method and Apparatus", which is incorporated herein by reference in its entirety.
Technical Field
The present invention relates to computers, and in particular to a virtual resource allocation method and apparatus.
Background Art
At process context switch, the hardware state related to the user-level thread is saved. This method is widely used in hardware implementations such as transactional memory and deterministic replay. In these applications, different user-level threads need to be distinguished. For example, in deterministic replay, memory access conflicts from different user-level threads need to be detected and recorded, so when a user-level thread is switched, the hardware resources related to the user-level thread need to be saved and updated. As another example, in transactional memory a transaction belongs to a single user-level thread; when a user-level thread is switched, that event must be detected and the state related to the thread's transaction must be saved, otherwise the atomicity of the transaction cannot be guaranteed. In the 1-on-1 model of shared-memory multithreading, user-level threads and lightweight processes correspond one to one; the existing method, which saves the hardware resources related to the user-level thread at process context switch, is feasible in the 1-on-1 model. In the M-on-N model of shared-memory multithreading, however, there are multiple kinds of correspondences between user-level threads and lightweight processes. A user-level thread is implemented by a user library. A lightweight process is a kernel-supported user-level thread that shares address space and process resources with other lightweight processes; lightweight processes are bound to kernel-mode threads, and user-level threads acquire processor resources by binding to lightweight processes. The kernel is unaware of the existence of user-level threads, and a user-level thread switch may occur in user space without the kernel knowing. The method of saving the hardware resources related to user-level threads at process context switch is therefore not precise enough: user-level thread switches are missed, which creates serious accuracy risks for methods such as transactional memory, deterministic replay and data race checking.
Summary of the Invention
Embodiments of the present invention provide a virtual resource allocation method and apparatus, which can avoid missing user-level thread switches during process context switching and improve the accuracy of methods such as transactional memory, deterministic replay and data race checking. In a first aspect, an embodiment of the present invention provides a virtual resource allocation method, including: when a user-level thread is suspended, a virtual resource allocation apparatus saves the hardware resources corresponding to the user-level thread in the control data block of the user-level thread; and the virtual resource allocation apparatus saves the hardware resources corresponding to the user-level thread in the control data block of the lightweight process corresponding to the user-level thread. With reference to the first aspect, in a first possible implementation, the method further includes: the virtual resource allocation apparatus reading the hardware resources corresponding to the user-level thread saved in the control data block of the user-level thread and loading them into the hardware corresponding to those hardware resources;
and the virtual resource allocation apparatus reading the hardware resources corresponding to the user-level thread saved in the control data block of the lightweight process and loading them into the hardware corresponding to those hardware resources.
With reference to the first aspect and the first possible implementation of the first aspect, in a second possible implementation, saving, by the virtual resource allocation apparatus, the hardware resources corresponding to the user-level thread in the control data block of the user-level thread specifically includes: the virtual resource allocation apparatus adding a first data structure to the control data block of the user-level thread; and saving the hardware resources corresponding to the user-level thread into the first data structure. Reading, by the virtual resource allocation apparatus, the hardware resources corresponding to the user-level thread saved in the control data block of the user-level thread and loading them into the hardware corresponding to those hardware resources specifically includes: the virtual resource allocation apparatus reading the hardware resources corresponding to the user-level thread from the first data structure and loading them into the hardware corresponding to those hardware resources.
With reference to the first aspect and the first possible implementation of the first aspect, in a third possible implementation, saving, by the virtual resource allocation apparatus, the hardware resources corresponding to the user-level thread in the control data block of the lightweight process specifically includes: the virtual resource allocation apparatus adding a second data structure to the control data block of the lightweight process; and saving the hardware resources corresponding to the user-level thread into the second data structure. Reading, by the virtual resource allocation apparatus, the hardware resources corresponding to the user-level thread saved in the control data block of the lightweight process and loading them into the hardware corresponding to those hardware resources specifically includes: the virtual resource allocation apparatus reading the hardware resources corresponding to the user-level thread from the second data structure and loading them into the hardware corresponding to those hardware resources.
With reference to the first aspect, in a fourth possible implementation, the method further includes: when the user-level thread is suspended, the virtual resource allocation apparatus locally reading the hardware resources of the user-level thread.
With reference to the first aspect, in a fifth possible implementation, the method further includes: when the lightweight process is suspended, the virtual resource allocation apparatus locally reading the hardware resources of all user-level threads bound to the lightweight process.
With reference to the first aspect or any one of the possible implementations of the first aspect, in a sixth possible implementation, the hardware resources include: a scalar clock, a read set and a write set, or a vector clock corresponding to the user-level thread.
In a second aspect, an embodiment of the present invention provides a virtual resource allocation apparatus, including: a first saving unit, configured to save, when a user-level thread is suspended, the hardware resources corresponding to the user-level thread in the control data block of the user-level thread;
and a second saving unit, configured to save the hardware resources corresponding to the user-level thread in the control data block of the lightweight process corresponding to the user-level thread.
With reference to the second aspect, in a first possible implementation, the apparatus further includes: a first loading unit, configured to read the hardware resources corresponding to the user-level thread saved by the first saving unit in the control data block of the user-level thread and load them into the hardware corresponding to those hardware resources;
and a second loading unit, configured to read the hardware resources corresponding to the user-level thread saved by the second saving unit in the control data block of the lightweight process and load them into the hardware corresponding to those hardware resources.
With reference to the second aspect and the first possible implementation of the second aspect, in a second possible implementation, the first saving unit includes:
a first adding subunit, configured to add a first data structure to the control data block of the user-level thread; and a first saving subunit, configured to save the hardware resources corresponding to the user-level thread into the first data structure added by the first adding subunit. The first loading unit is specifically configured to read the hardware resources corresponding to the user-level thread from the first data structure added by the first adding subunit and load them into the corresponding hardware.
With reference to the second aspect and the first possible implementation of the first aspect, in a third possible implementation, the second saving unit includes:
a second adding subunit, configured to add a second data structure to the control data block of the lightweight process;
and a second saving subunit, configured to save the hardware resources corresponding to the user-level thread into the second data structure added by the second adding subunit. The second loading unit is specifically configured to read the hardware resources corresponding to the user-level thread from the second data structure added by the second adding subunit and load them into the corresponding hardware.
With reference to the second aspect, in a fourth possible implementation, the apparatus further includes: a first reading unit, configured to locally read the hardware resources of the user-level thread when the user-level thread is suspended.
With reference to the second aspect, in a fifth possible implementation, the apparatus further includes: a second reading unit, configured to locally read the hardware resources of all user-level threads bound to the lightweight process when the lightweight process is suspended.
With reference to the second aspect or any one of the possible implementations of the second aspect, the hardware resources include: a scalar clock, a read set and a write set, or a vector clock corresponding to the user-level thread.
The virtual resource allocation method and apparatus provided in the embodiments of the present invention virtualize hardware resources for a user-level thread in user space and kernel space at the same time, so that user-level threads can be identified accurately and hardware resources can be virtualized for them accurately; missing user-level thread switches during process context switching is avoided, and the accuracy of methods such as transactional memory, deterministic replay and data race checking is improved.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the following briefly introduces the accompanying drawings required for describing the embodiments or the prior art.
FIG. 1 is a schematic flowchart of a virtual resource allocation method according to an embodiment of the present invention;
FIG. 2 is a schematic flowchart of a virtual resource allocation method according to another embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a virtual resource allocation apparatus according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a virtual resource allocation apparatus according to another embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a virtual resource allocation apparatus according to another embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a virtual resource allocation apparatus according to still another embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are merely some rather than all of the embodiments of the present invention.
FIG. 1 is a schematic flowchart of a virtual resource allocation method according to an embodiment of the present invention. This embodiment of the present invention is applicable to virtualizing hardware resources for user-level threads in an operating system. The method is typically performed by a virtual resource allocation apparatus, which is generally a computer or a functional unit or module in a computer. Referring to FIG. 1, the method may include the following steps:
Step 10: When a user-level thread is suspended, the virtual resource allocation apparatus saves the hardware resources corresponding to the user-level thread in the control data block of the user-level thread.
Step 20: The virtual resource allocation apparatus saves the hardware resources corresponding to the user-level thread in the control data block of the lightweight process corresponding to the user-level thread.
The step numbers do not indicate a chronological order between the steps; they are used only to distinguish the steps in the implementation. For example, step 10 may be performed before step 20 or simultaneously with step 20.
In the M-on-N multithreading implementation model, a user-level thread switch may occur in user space; therefore, in addition to virtualizing hardware resources for user-level threads in kernel space, hardware resources also need to be virtualized for user-level threads in user space. The scheme is of course also applicable to the 1-on-1 multithreading implementation model; in the 1-on-1 model, because every user-level thread is bound to a lightweight process and the lightweight process is the basic unit of kernel scheduling, hardware resources only need to be virtualized for user-level threads in kernel space.
In step 10, saving the hardware resources corresponding to the user-level thread in the control data block of the user-level thread means saving the hardware resources corresponding to the user-level thread in user space. The hardware resources may include: CPU (Central Processing Unit), I/O (Input/Output), files, scalar clocks, vector clocks, read sets, write sets, memory, instruction counts and other resources. In step 20, saving the hardware resources corresponding to the user-level thread in the control data block of the lightweight process means saving the hardware resources corresponding to the user-level thread in kernel space.
A user-level thread can be suspended in two cases.
Case 1: the user-level thread is suspended in user space. The hardware resources corresponding to the user-level thread are saved in the control data block of the user-level thread, i.e. step 10.
Case 2: the user-level thread is suspended in kernel space along with the suspension of the lightweight process to which it is bound. Because the suspension of the lightweight process occurs in kernel mode, the kernel does not know the specifics of the user-level thread. However, after the lightweight process is suspended, the user-level thread is suspended as well, and the binding between the user-level thread and the lightweight process does not change. The hardware resources corresponding to the user-level thread are therefore saved in the control data block of the lightweight process, i.e. step 20.
In this scheme, when a user-level thread is suspended, the hardware resources corresponding to the user-level thread are saved both in the control data block of the user-level thread and in the control data block of the lightweight process, that is, hardware resources are virtualized for the user-level thread in user space and kernel space at the same time. This makes it possible to identify user-level threads accurately and to virtualize hardware resources for them accurately, and improves the accuracy of methods such as transactional memory, data race checking and deterministic replay.
When a user-level thread is rescheduled, the hardware resources corresponding to the user-level thread need to be restored. Referring to FIG. 2, which is a schematic flowchart of a virtual resource allocation method according to another embodiment of the present invention, the scheme may include the following steps:
Step 30: The virtual resource allocation apparatus reads the hardware resources corresponding to the user-level thread saved in the control data block of the user-level thread and loads them into the hardware corresponding to those hardware resources.
Step 40: The virtual resource allocation apparatus reads the hardware resources corresponding to the user-level thread saved in the control data block of the lightweight process and loads them into the hardware corresponding to those hardware resources.
Restoring the hardware resources corresponding to the user-level thread is divided into two cases.
Case 1: the user-level thread is rescheduled in user space. This is covered by step 30: the hardware resources corresponding to the user-level thread saved in the control data block of the user-level thread are read and loaded into the hardware corresponding to those resources.
Case 2: the lightweight process is rescheduled in kernel space, and the user-level thread bound to it also resumes execution. This is covered by step 40: the hardware resources corresponding to the user-level thread saved in the control data block of the lightweight process are read and loaded into the hardware corresponding to those resources.
In this scheme, when a user-level thread is rescheduled, the hardware resources corresponding to the user-level thread are restored both from the control data block of the user-level thread and from the control data block of the lightweight process, that is, they are restored in user space and kernel space at the same time, which makes it possible to identify the user-level thread accurately and thus restore the corresponding hardware resources for it accurately.
On the basis of the above scheme, saving, by the virtual resource allocation apparatus, the hardware resources corresponding to the user-level thread in the control data block of the user-level thread may include the following steps:
Step 101: The virtual resource allocation apparatus adds a first data structure to the control data block of the user-level thread.
Step 102: The hardware resources corresponding to the user-level thread are saved into the first data structure.
The control data block of a user-level thread generally includes the following resources:
(1) Thread ID
(2) Register state (instruction pointer PC and stack pointer SP)
(3) Stack
(4) Signal mask
(5) Priority
(6) Thread-local storage (user-level thread private storage)
A first data structure, item (7), is added to the control data block of the user-level thread to hold the hardware resources related to the user-level thread. This newly added first data structure corresponds to different hardware resources depending on the application: in a transactional memory application, a read set and a write set; in a data race checking application, a vector clock; in a deterministic replay application, a scalar clock.
The virtual resource allocation apparatus saves the hardware resources corresponding to the user-level thread into the first data structure, so as to save the hardware resources corresponding to the user-level thread in user space.
This scheme uses the first data structure to save the hardware resources corresponding to the user-level thread in user space.
On the basis of the above scheme, saving, by the virtual resource allocation apparatus, the hardware resources corresponding to the user-level thread in the control data block of the lightweight process may include the following steps:
Step 201: The virtual resource allocation apparatus adds a second data structure to the control data block of the lightweight process.
Step 202: The hardware resources corresponding to the user-level thread are saved into the second data structure.
The control data block of a lightweight process generally includes the following resources:
(1) Lightweight process number (LWP ID)
(2) Register state (instruction pointer PC and stack pointer SP)
(3) Signal mask
(4) Alternate signal stack and masks for alternate stack disable and onstack
(5) User and user+system virtual time alarms
(6) User time and system CPU usage
(7) Profiling state
(8) Scheduling class and priority
(9) Second data structure (new feature hardware resource)
The second data structure, item (9), is added to the control data block of the lightweight process to hold the hardware resources related to the user-level thread. The newly added second data structure corresponds to different hardware resources depending on the application: in a transactional memory application, a read set and a write set; in a data race checking application, a vector clock; in a deterministic replay application, a scalar clock.
The virtual resource allocation apparatus saves the hardware resources corresponding to the user-level thread into the second data structure, so as to save the hardware resources corresponding to the user-level thread in kernel space.
This scheme uses the second data structure to save the hardware resources corresponding to the user-level thread in kernel space.
Further optionally, before step 10, the scheme further includes: when the user-level thread is suspended, the virtual resource allocation apparatus locally reads the hardware resources of the user-level thread.
Optionally, when the suspension of the user-level thread is caused by the suspension of the lightweight process, after the user-level thread is suspended and before step 201, the scheme further includes: the virtual resource allocation apparatus locally reads the hardware resources of all user-level threads bound to the lightweight process.
Correspondingly, when the user-level thread resumes execution, in user space the virtual resource allocation apparatus reads the hardware resources corresponding to the user-level thread saved in the control data block of the user-level thread and loads them into the hardware corresponding to those resources, which may include: the virtual resource allocation apparatus reading the hardware resources corresponding to the user-level thread from the first data structure and loading them into the hardware corresponding to those resources.
In kernel space, the virtual resource allocation apparatus reads the hardware resources corresponding to the user-level thread saved in the control data block of the lightweight process and loads them into the hardware corresponding to those resources, which may include: the virtual resource allocation apparatus reading the hardware resources corresponding to the user-level thread from the second data structure and loading them into the hardware corresponding to those resources.
In this scheme, the hardware resources corresponding to the user-level thread are read from the first data structure so as to restore them in user space, and are read from the second data structure so as to restore them in kernel space, so that hardware resources are restored for the user-level thread more accurately.
Specifically, in a deterministic replay application, on the basis of the above scheme, when the user-level thread is suspended the following steps are performed:
Step 1001: When the user-level thread is suspended, the virtual resource allocation apparatus locally reads the scalar clock of the user-level thread.
Optionally, when the suspension of the user-level thread is caused by the suspension of the lightweight process, step 1001 is specifically: the virtual resource allocation apparatus locally reads the scalar clocks of all user-level threads bound to the lightweight process.
Step 1002: The scalar clock of the user-level thread is written into the first data structure.
Step 1003: The scalar clocks of the user-level threads bound to the lightweight process are written into the second data structure.
Step 1002 saves the hardware resources corresponding to the user-level thread in user space; step 1003 saves them in kernel space. There is no chronological order between saving in user space and saving in kernel space.
In deterministic replay, in order to maintain the temporal ordering relation, the hardware resource that needs to be saved is the scalar clock; therefore, when the user-level thread is suspended, the virtual resource allocation apparatus locally reads the scalar clock of the user-level thread. The virtual resource allocation apparatus can read the scalar clock of the user-level thread from the scalar clock register of the local processor and then write the value into the first data structure, which can be recorded as Thread_ScalarClock (thread scalar clock), so as to save the hardware resources corresponding to the user-level thread in user space.
When the lightweight process is suspended, the virtual resource allocation apparatus can read the scalar clocks of all user-level threads under the lightweight process from the scalar clock registers of the local processor and then write the values into the second data structure, which can be recorded as LWP_ScalarClock (lightweight process scalar clock), so as to save the hardware resources corresponding to the user-level threads in kernel space.
In this scheme, by locally reading the scalar clock of the user-level thread and the scalar clocks of all user-level threads under the lightweight process and writing them into the first data structure and the second data structure respectively, the hardware resources corresponding to the user-level thread can be saved accurately, so that the deterministic replay method is implemented accurately.
Correspondingly, on the basis of the above scheme, when the user-level thread is rescheduled, the virtual resource allocation apparatus reads the hardware resources corresponding to the user-level thread from the first data structure and loads them into the corresponding hardware, which may include the following steps:
Step 1004: The virtual resource allocation apparatus reads the scalar clock of the user-level thread from the first data structure.
Step 1005: The scalar clock of the user-level thread is loaded into the hardware corresponding to the scalar clock.
When the lightweight process is rescheduled, the virtual resource allocation apparatus reads the hardware resources corresponding to the user-level thread from the second data structure and loads them into the corresponding hardware, which may include the following steps:
Step 1006: The virtual resource allocation apparatus reads the scalar clocks of all user-level threads under the lightweight process from the second data structure.
Step 1007: The scalar clocks of the user-level threads bound to the lightweight process are loaded into the hardware corresponding to the scalar clock.
Steps 1004 and 1005 restore the hardware resources corresponding to the user-level thread in user space; steps 1006 and 1007 restore them in kernel space. There is no chronological order between restoring in user space and restoring in kernel space.
In deterministic replay, when the user-level thread is rescheduled, the virtual resource allocation apparatus can read the scalar clock value Thread_ScalarClock of the user-level thread from the first data structure in the control data block of the user-level thread and then write the value into the scalar clock register of the local processor, i.e. load it into the corresponding hardware, so as to restore the hardware resources corresponding to the user-level thread in user space.
When the lightweight process is rescheduled, the virtual resource allocation apparatus can read the scalar clock values LWP_ScalarClock of the user-level threads from the second data structure in the control data block of the lightweight process and then write the values into the scalar clock registers of the local processor, i.e. load them into the corresponding hardware, so as to restore the hardware resources corresponding to the user-level threads in kernel space.
In this scheme, by reading the scalar clock of the user-level thread and the scalar clocks of all user-level threads under the lightweight process from the first data structure and the second data structure respectively and loading them into the corresponding hardware, the hardware resources corresponding to the user-level thread can be restored accurately, so that the deterministic replay method is implemented accurately.
Specifically, in a transactional memory application, on the basis of the above scheme, when the user-level thread is suspended the following steps are performed:
Step 2001: When the user-level thread is suspended, the virtual resource allocation apparatus locally reads the read set and the write set of the user-level thread.
Optionally, when the suspension of the user-level thread is caused by the suspension of the lightweight process, step 2001 is specifically: the virtual resource allocation apparatus locally reads the read sets and write sets of all user-level threads bound to the lightweight process.
Step 2002: The read set and the write set of the user-level thread are written into the first data structure.
Step 2003: The read sets and write sets of the user-level threads bound to the lightweight process are written into the second data structure.
Step 2002 saves the hardware resources corresponding to the user-level thread in user space; step 2003 saves them in kernel space. There is no chronological order between saving in user space and saving in kernel space.
In transactional memory, to guarantee that a transaction executes atomically at the storage level, i.e. that no memory access from another user-level thread interacts with the memory accesses inside the transaction, the hardware resources that need to be saved are the read set and the write set; therefore, when the user-level thread is suspended, the virtual resource allocation apparatus locally reads the read set and the write set of the user-level thread. The virtual resource allocation apparatus can read the read set and the write set of the user-level thread from the read/write set registers of the local processor and then write the values into the first data structure; the read set and the write set in the first data structure can be recorded as Rset1 and Wset1 respectively, so as to save the hardware resources corresponding to the user-level thread in user space.
When the lightweight process is suspended, the virtual resource allocation apparatus can read the read sets and write sets of all user-level threads under the lightweight process from the read/write set registers of the local processor and then write the values into the second data structure; the read sets and write sets in the second data structure can be recorded as Rset2 and Wset2 respectively, so as to save the hardware resources corresponding to the user-level threads in kernel space.
In this scheme, by locally reading the read set and write set of the user-level thread and the read sets and write sets of all user-level threads under the lightweight process and writing them into the first data structure and the second data structure respectively, the hardware resources corresponding to the user-level thread can be saved accurately, so that the transactional memory method is implemented accurately.
Correspondingly, on the basis of the above scheme, when the user-level thread is rescheduled, the virtual resource allocation apparatus reads the hardware resources corresponding to the user-level thread from the first data structure and loads them into the corresponding hardware, which may include the following steps:
Step 2004: The virtual resource allocation apparatus reads the read set and the write set of the user-level thread from the first data structure.
Step 2005: The read set and the write set of the user-level thread are loaded into the corresponding hardware.
When the lightweight process is rescheduled, the virtual resource allocation apparatus reads the hardware resources corresponding to the user-level thread from the second data structure and loads them into the corresponding hardware, which may include the following steps:
Step 2006: The virtual resource allocation apparatus reads the read sets and write sets of all user-level threads under the lightweight process from the second data structure.
Step 2007: The read sets and write sets of the user-level threads bound to the lightweight process are loaded into the corresponding hardware.
Steps 2004 and 2005 restore the hardware resources corresponding to the user-level thread in user space; steps 2006 and 2007 restore them in kernel space. There is no chronological order between restoring in user space and restoring in kernel space.
In transactional memory, when the user-level thread is rescheduled, the virtual resource allocation apparatus can read the read set and write set of the user-level thread, i.e. Rset1 and Wset1, from the first data structure in the control data block of the user-level thread and then write the values into the read/write set registers of the local processor, i.e. load them into the corresponding hardware, so as to restore the hardware resources corresponding to the user-level thread in user space.
When the lightweight process is rescheduled, the virtual resource allocation apparatus can read the read sets and write sets of the user-level threads, i.e. Rset2 and Wset2, from the second data structure in the control data block of the lightweight process and then write the values into the read/write set registers of the local processor, i.e. load them into the corresponding hardware, so as to restore the hardware resources corresponding to the user-level threads in kernel space.
In this scheme, by reading the read set and write set of the user-level thread and the read sets and write sets of all user-level threads under the lightweight process from the first data structure and the second data structure respectively and loading them into the corresponding hardware, the hardware resources corresponding to the user-level thread can be restored accurately, so that the transactional memory method is implemented accurately.
Specifically, in a data race checking application, on the basis of the above scheme, when the user-level thread is suspended the following steps may be performed:
Step 3001: The virtual resource allocation apparatus locally reads the vector clock of the user-level thread.
Optionally, when the suspension of the user-level thread is caused by the suspension of the lightweight process, step 3001 is specifically: the virtual resource allocation apparatus locally reads the vector clocks of all user-level threads bound to the lightweight process.
Step 3002: The vector clock of the user-level thread is written into the first data structure.
Step 3003: The vector clocks of the user-level threads bound to the lightweight process are written into the second data structure.
Step 3002 saves the hardware resources corresponding to the user-level thread in user space; step 3003 saves them in kernel space. There is no chronological order between saving in user space and saving in kernel space.
In data race checking, the hardware resource that needs to be saved is the vector clock; therefore, when the user-level thread is suspended, the virtual resource allocation apparatus locally reads the vector clock of the user-level thread. The virtual resource allocation apparatus can read the vector clock of the user-level thread from the vector clock register of the local processor and then write the value into the first data structure, which can be recorded as Thread_VectorClock (thread vector clock), so as to save the hardware resources corresponding to the user-level thread in user space.
When the lightweight process is suspended, in addition to the prior-art operations, the virtual resource allocation apparatus can read the vector clocks of all user-level threads under the lightweight process from the vector clock registers of the local processor and then write the values into the second data structure, which can be recorded as LWP_VectorClock (lightweight process vector clock), so as to save the hardware resources corresponding to the user-level threads in kernel space.
In this scheme, by locally reading the vector clock of the user-level thread and the vector clocks of all user-level threads under the lightweight process and writing them into the first data structure and the second data structure respectively, the hardware resources corresponding to the user-level thread can be saved accurately, so that the data race checking method is implemented accurately. Correspondingly, on the basis of the above scheme, when the user-level thread is rescheduled, the virtual resource allocation apparatus reads the hardware resources corresponding to the user-level thread from the first data structure and loads them into the corresponding hardware, which may include the following steps:
Step 3004: When the user-level thread is rescheduled, the virtual resource allocation apparatus reads the vector clock of the user-level thread from the first data structure.
Step 3005: The vector clock of the user-level thread is loaded into the corresponding hardware. When the lightweight process is rescheduled, the virtual resource allocation apparatus reads the hardware resources corresponding to the user-level thread from the second data structure and loads them into the corresponding hardware, which may include the following steps:
Step 3006: When the lightweight process is rescheduled, the virtual resource allocation apparatus reads the vector clocks of all user-level threads under the lightweight process from the second data structure.
Step 3007: The vector clocks of the user-level threads bound to the lightweight process are loaded into the corresponding hardware.
Steps 3004 and 3005 restore the hardware resources corresponding to the user-level thread in user space; steps 3006 and 3007 restore them in kernel space. There is no chronological order between restoring in user space and restoring in kernel space.
In data race checking, when the user-level thread is rescheduled, in addition to the prior-art operations, the virtual resource allocation apparatus can read the vector clock value Thread_VectorClock of the user-level thread from the first data structure in the control data block of the user-level thread and then write the value into the vector clock register of the local processor, i.e. load it into the corresponding hardware, so as to restore the hardware resources corresponding to the user-level thread in user space.
When the lightweight process is rescheduled, in addition to the prior-art operations, the virtual resource allocation apparatus can read the vector clock values LWP_VectorClock of the user-level threads from the second data structure in the control data block of the lightweight process and then write the values into the vector clock registers of the local processor, i.e. load them into the corresponding hardware, so as to restore the hardware resources corresponding to the user-level threads in kernel space.
In this scheme, by reading the vector clock of the user-level thread and the vector clocks of all user-level threads under the lightweight process from the first data structure and the second data structure respectively and loading them into the corresponding hardware, the hardware resources corresponding to the user-level thread can be restored accurately, so that the data race checking method is implemented accurately.
FIG. 3 is a schematic structural diagram of a virtual resource allocation device according to an embodiment of the present invention. The virtual resource allocation device is configured to implement the virtual resource allocation method of the above embodiments. Referring to FIG. 3, the virtual resource allocation device includes a first saving unit 11 and a second saving unit 21.
The first saving unit 11 is configured to save, when a user-level thread is suspended, the hardware resources corresponding to the user-level thread in the control data block of the user-level thread.
The second saving unit 21 is configured to save the hardware resources corresponding to the user-level thread in the control data block of the lightweight process corresponding to the user-level thread.
On the basis of the above scheme, the device may further include a first loading unit 31 and a second loading unit 41, as shown in FIG. 4, which is a schematic structural diagram of a virtual resource allocation device according to another embodiment of the present invention.
The first loading unit 31 is configured to read the hardware resources corresponding to the user-level thread that the first saving unit 11 saved in the control data block of the user-level thread, and load them into the hardware corresponding to those hardware resources.
The second loading unit 41 is configured to read the hardware resources corresponding to the user-level thread that the second saving unit 21 saved in the control data block of the lightweight process, and load them into the hardware corresponding to those hardware resources.
Further, the first saving unit 11 may include a first adding subunit 111 and a first saving subunit 112.
The first adding subunit 111 is configured to add a first data structure to the control data block of the user-level thread; the first saving subunit 112 is configured to save the hardware resources corresponding to the user-level thread into the first data structure added by the first adding subunit 111.
Specifically, the first loading unit 31 is configured to:
read the hardware resources corresponding to the user-level thread from the first data structure added by the first adding subunit 111, and load them into the corresponding hardware.
The second saving unit 21 may include a second adding subunit 211 and a second saving subunit 212.
The second adding subunit 211 is configured to add a second data structure to the control data block of the lightweight process;
the second saving subunit 212 is configured to save the hardware resources corresponding to the user-level thread into the second data structure added by the second adding subunit 211.
The second loading unit 41 is specifically configured to:
read the hardware resources corresponding to the user-level thread from the second data structure added by the second adding subunit 211, and load them into the corresponding hardware.
In this scheme, when a user-level thread is suspended, the first saving unit saves the hardware resources corresponding to the user-level thread in the control data block of the user-level thread, and the second saving unit also saves those hardware resources in the control data block of the lightweight process; that is, hardware resources are virtualized for the user-level thread in both user space and kernel space, so the user-level thread can be identified accurately and hardware resources can be virtualized for it accurately, which improves the accuracy of methods such as transactional memory, data race check, and deterministic playback.
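Purely as an illustration of how the units of FIG. 3 and FIG. 4 could be grouped in software, the following sketch collects the saving and loading operations behind function pointers; the struct layout and all names are invented for this sketch and are not a concrete implementation of the device.

    /* Illustrative grouping of the units of FIG. 3 / FIG. 4 behind function
     * pointers. Names are invented for this sketch. */
    struct thread_cb;   /* user-level thread control data block (opaque here)   */
    struct lwp_cb;      /* lightweight-process control data block (opaque here) */

    struct vr_alloc_device {
        /* first saving unit 11: save resources in the thread's control block */
        void (*save_in_thread_cb)(struct thread_cb *t);
        /* second saving unit 21: save resources in the LWP's control block   */
        void (*save_in_lwp_cb)(struct lwp_cb *l);
        /* first loading unit 31: restore from the thread's control block     */
        void (*load_from_thread_cb)(const struct thread_cb *t);
        /* second loading unit 41: restore from the LWP's control block       */
        void (*load_from_lwp_cb)(const struct lwp_cb *l);
    };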
Further optionally, referring to FIG. 5, which is a schematic structural diagram of a virtual resource allocation device according to another embodiment of the present invention, the virtual resource allocation device further includes: a first reading unit 51a, configured to locally read the hardware resources of the user-level thread when the user-level thread is suspended.
The second reading unit 51b is configured to locally read the hardware resources of all user-level threads bound to the lightweight process when the lightweight process is suspended.
Optionally, the hardware resources include: the scalar clock, the read set and write set, or the vector clock corresponding to the user-level thread.
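The three kinds of hardware resources listed above can be pictured as variants of a single saved record, for example as a tagged union; the representation below is only a sketch with assumed sizes and names.

    /* Sketch of the three resource variants as a tagged union. Sizes and names
     * are assumptions made for illustration. */
    #include <stddef.h>
    #include <stdint.h>

    #define SET_MAX    8
    #define VCLOCK_LEN 4

    struct rw_set { uintptr_t addr[SET_MAX]; size_t count; };
    struct vclock { uint64_t c[VCLOCK_LEN]; };

    enum hw_resource_kind { HW_SCALAR_CLOCK, HW_RW_SET, HW_VECTOR_CLOCK };

    struct hw_resource {
        enum hw_resource_kind kind;
        union {
            uint64_t scalar_clock;                 /* deterministic playback */
            struct { struct rw_set rd, wr; } sets; /* transactional memory   */
            struct vclock vector_clock;            /* data race check        */
        } u;
    };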
FIG. 6 is a schematic structural diagram of a virtual resource allocation device according to yet another embodiment of the present invention. The virtual resource allocation device is configured to implement the virtual resource allocation method provided by the above method embodiments. The virtual resource allocation device may be a computer or a functional entity on a computer and includes at least one processor 61, a memory 62, and a bus 63, where the bus 63 is configured to implement connection and communication between the processor 61 and the memory 62, and the memory 62 is configured to store the program code and data executed by the processor 61.
The bus 63 may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like, which is not limited here. The bus 63 may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is shown in FIG. 6, but this does not mean that there is only one bus or one type of bus. The memory 62 is configured to store data or executable program code, where the program code includes computer operation instructions and may specifically be an operating system, an application program, and the like. The memory 62 may include a high-speed RAM memory and may also include a non-volatile memory, for example at least one magnetic disk memory.
The processor 61 may be a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention.
The processor 61 is configured to implement the virtual resource allocation method of the above embodiments by executing the program code in the memory 62, which specifically includes:
when a user-level thread is suspended, saving the hardware resources corresponding to the user-level thread in the control data block of the user-level thread;
saving the hardware resources corresponding to the user-level thread in the control data block of the lightweight process corresponding to the user-level thread.
In this scheme, when a user-level thread is suspended, the hardware resources corresponding to the user-level thread are saved both in the control data block of the user-level thread and in the control data block of the lightweight process, i.e. hardware resources are virtualized for the user-level thread in both user space and kernel space, so the user-level thread can be identified accurately and hardware resources can be virtualized for it accurately, which improves the accuracy of methods such as transactional memory, data race check, and deterministic playback.
On the basis of the above scheme, the processor 61 is further configured to:
read the hardware resources corresponding to the user-level thread saved in the control data block of the user-level thread, and load them into the hardware corresponding to the hardware resources;
read the hardware resources corresponding to the user-level thread saved in the control data block of the lightweight process, and load them into the hardware corresponding to the hardware resources.
In this scheme, when the user-level thread is rescheduled, the hardware resources corresponding to the user-level thread are restored both from the control data block of the user-level thread and from the control data block of the lightweight process, i.e. they are restored in both user space and kernel space, so the user-level thread can be identified accurately and its corresponding hardware resources can be restored accurately.
Further optionally, the processor 61 is specifically configured to: add a first data structure to the control data block of the user-level thread; save the hardware resources corresponding to the user-level thread into the first data structure;
and read the hardware resources corresponding to the user-level thread from the first data structure and load them into the hardware corresponding to the hardware resources.
Further optionally, the processor 61 is specifically configured to:
add a second data structure to the control data block of the lightweight process;
save the hardware resources corresponding to the user-level thread into the second data structure;
read the hardware resources corresponding to the user-level thread from the second data structure, and load them into the hardware corresponding to the hardware resources.
In this scheme, the hardware resources corresponding to the user-level thread are read from the first data structure so that they are restored in user space, and are read from the second data structure so that they are restored in kernel space, so that the hardware resources are restored for the user-level thread more accurately.
Further, the processor 61 is also configured to locally read the hardware resources of the user-level thread when the user-level thread is suspended;
or, when the lightweight process is suspended, to locally read the hardware resources of all user-level threads bound to the lightweight process.
Optionally, the hardware resources include: the scalar clock, the read set and write set, or the vector clock corresponding to the user-level thread.
In this scheme, by locally reading the vector clock of the user-level thread and the vector clocks of all user-level threads under the lightweight process, and writing them into the first data structure and the second data structure respectively, the hardware resources corresponding to the user-level threads can be saved accurately, so that the data race check method is implemented accurately.
From the description of the above embodiments, those skilled in the art can clearly understand that the present invention may be implemented in hardware, in firmware, or in a combination of them. When implemented in software, the above functions may be stored in a computer-readable medium or transmitted as one or more instructions or code on a computer-readable medium. Computer-readable media include computer storage media and communication media, where communication media include any medium that facilitates transferring a computer program from one place to another. A storage medium may be any available medium that a computer can access. By way of example and not limitation, computer-readable media may include RAM, ROM, EEPROM, CD-ROM or other optical disc storage, magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. In addition, any connection may properly be termed a computer-readable medium. For example, if the software is transmitted from a website, a server, or another remote source using a coaxial cable, an optical fiber cable, a twisted pair, a digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, optical fiber cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of the medium to which they belong. As used in the present invention, disk and disc include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where a disk usually reproduces data magnetically, while a disc reproduces data optically with a laser. Combinations of the above should also be included within the scope of protection of computer-readable media.
The above descriptions are merely specific embodiments of the present invention, but the scope of protection of the present invention is not limited thereto. Any variation or replacement that can be readily conceived by a person skilled in the art within the technical scope disclosed by the present invention shall fall within the scope of protection of the present invention. Therefore, the scope of protection of the present invention shall be subject to the scope of protection of the claims.

Claims

1. A virtual resource allocation method, characterized by comprising:
when a user-level thread is suspended, saving, by a virtual resource allocation device, hardware resources corresponding to the user-level thread in a control data block of the user-level thread; and
saving, by the virtual resource allocation device, the hardware resources corresponding to the user-level thread in a control data block of a lightweight process corresponding to the user-level thread.
2. The virtual resource allocation method according to claim 1, characterized in that the method further comprises:
reading, by the virtual resource allocation device, the hardware resources corresponding to the user-level thread saved in the control data block of the user-level thread, and loading them into hardware corresponding to the hardware resources; and
reading, by the virtual resource allocation device, the hardware resources corresponding to the user-level thread saved in the control data block of the lightweight process, and loading them into the hardware corresponding to the hardware resources.
3. The method according to claim 1 or 2, characterized in that the saving, by the virtual resource allocation device, the hardware resources corresponding to the user-level thread in the control data block of the user-level thread specifically comprises: adding, by the virtual resource allocation device, a first data structure to the control data block of the user-level thread; and saving the hardware resources corresponding to the user-level thread into the first data structure; and the reading, by the virtual resource allocation device, the hardware resources corresponding to the user-level thread saved in the control data block of the user-level thread and loading them into the hardware corresponding to the hardware resources specifically comprises: reading, by the virtual resource allocation device, the hardware resources corresponding to the user-level thread from the first data structure, and loading them into the hardware corresponding to the hardware resources.
4. The method according to claim 1 or 2, characterized in that the saving, by the virtual resource allocation device, the hardware resources corresponding to the user-level thread in the control data block of the lightweight process specifically comprises: adding, by the virtual resource allocation device, a second data structure to the control data block of the lightweight process; and saving the hardware resources corresponding to the user-level thread into the second data structure; and the reading, by the virtual resource allocation device, the hardware resources corresponding to the user-level thread saved in the control data block of the lightweight process and loading them into the hardware corresponding to the hardware resources specifically comprises: reading, by the virtual resource allocation device, the hardware resources corresponding to the user-level thread from the second data structure, and loading them into the hardware corresponding to the hardware resources.
5. The method according to claim 1, characterized in that the method further comprises: when the user-level thread is suspended, locally reading, by the virtual resource allocation device, the hardware resources of the user-level thread.
6. The method according to claim 1, characterized in that the method further comprises: when the lightweight process is suspended, locally reading, by the virtual resource allocation device, the hardware resources of all user-level threads bound to the lightweight process.
7. The method according to any one of claims 1 to 6, characterized in that the hardware resources comprise: a scalar clock, a read set and a write set, or a vector clock corresponding to the user-level thread.
8. A virtual resource allocation device, characterized by comprising:
a first saving unit, configured to save, when a user-level thread is suspended, hardware resources corresponding to the user-level thread in a control data block of the user-level thread; and
a second saving unit, configured to save the hardware resources corresponding to the user-level thread in a control data block of a lightweight process corresponding to the user-level thread.
9. The device according to claim 8, characterized in that the device further comprises:
a first loading unit, configured to read the hardware resources corresponding to the user-level thread that the first saving unit saved in the control data block of the user-level thread, and load them into hardware corresponding to the hardware resources; and
a second loading unit, configured to read the hardware resources corresponding to the user-level thread that the second saving unit saved in the control data block of the lightweight process, and load them into the hardware corresponding to the hardware resources.
10. The device according to claim 8 or 9, characterized in that the first saving unit comprises:
a first adding subunit, configured to add a first data structure to the control data block of the user-level thread; and a first saving subunit, configured to save the hardware resources corresponding to the user-level thread into the first data structure added by the first adding subunit;
and the first loading unit is specifically configured to: read the hardware resources corresponding to the user-level thread from the first data structure added by the first adding subunit, and load them into the corresponding hardware.
11. The device according to claim 8 or 9, characterized in that the second saving unit comprises:
a second adding subunit, configured to add a second data structure to the control data block of the lightweight process; and
a second saving subunit, configured to save the hardware resources corresponding to the user-level thread into the second data structure added by the second adding subunit;
and the second loading unit is specifically configured to: read the hardware resources corresponding to the user-level thread from the second data structure added by the second adding subunit, and load them into the corresponding hardware.
12. The device according to claim 8, characterized in that the device further comprises: a first reading unit, configured to locally read the hardware resources of the user-level thread when the user-level thread is suspended.
13. The device according to claim 8, characterized in that the device further comprises: a second reading unit, configured to locally read the hardware resources of all user-level threads bound to the lightweight process when the lightweight process is suspended.
14. The device according to any one of claims 8 to 13, characterized in that the hardware resources comprise: a scalar clock, a read set and a write set, or a vector clock corresponding to the user-level thread.
PCT/CN2014/086352 2013-09-22 2014-09-12 一种虚拟资源分配方法及装置 WO2015039582A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310444885.X 2013-09-22
CN201310444885.XA CN104461730B (zh) 2013-09-22 2013-09-22 一种虚拟资源分配方法及装置

Publications (1)

Publication Number Publication Date
WO2015039582A1 true WO2015039582A1 (zh) 2015-03-26

Family

ID=52688236

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/086352 WO2015039582A1 (zh) 2013-09-22 2014-09-12 一种虚拟资源分配方法及装置

Country Status (2)

Country Link
CN (1) CN104461730B (zh)
WO (1) WO2015039582A1 (zh)


Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105677487B (zh) * 2016-01-12 2019-02-15 浪潮通用软件有限公司 一种控制资源占用的方法及装置
US10430245B2 (en) * 2017-03-27 2019-10-01 Hong Kong Applied Science And Technology Research Institute Co., Ltd. Systems and methods for dynamic low latency optimization
CN107329812B (zh) * 2017-06-09 2018-07-06 腾讯科技(深圳)有限公司 一种运行协程的方法和装置
CN115586967B (zh) * 2022-10-10 2023-04-18 河南省人民医院 一种成人呼吸监测设备及系统
CN116028118B (zh) * 2023-01-31 2023-07-25 南京砺算科技有限公司 保障数据一致性的指令执行方法及图形处理器、介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1938686A (zh) * 2004-03-31 2007-03-28 英特尔公司 提供用户级多线程操作的方法和系统
CN101030152A (zh) * 2007-03-20 2007-09-05 华为技术有限公司 基于伪同步方式的操作控制方法及装置
US20080222401A1 (en) * 2007-03-07 2008-09-11 Dewey Douglas W Method and system for enabling state save and debug operations for co-routines in an event-driven environment
CN101556545A (zh) * 2009-05-22 2009-10-14 北京星网锐捷网络技术有限公司 一种实现进程支持的方法、装置和多线程系统

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2881239B1 (fr) * 2005-01-24 2007-03-23 Meiosys Soc Par Actions Simpli Procede de gestion d'acces a des ressources partagees dans un environnement multi-processeurs
US8079035B2 (en) * 2005-12-27 2011-12-13 Intel Corporation Data structure and management techniques for local user-level thread data
WO2010095182A1 (ja) * 2009-02-17 2010-08-26 パナソニック株式会社 マルチスレッドプロセッサ及びデジタルテレビシステム
CN103049328B (zh) * 2012-11-06 2016-03-02 武汉新光电网科信息技术有限公司 计算机系统中内存资源分配方法

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108696373A (zh) * 2017-04-06 2018-10-23 华为技术有限公司 虚拟资源分配方法、nfvo和系统
CN108696373B (zh) * 2017-04-06 2019-09-20 华为技术有限公司 虚拟资源分配方法、nfvo和系统

Also Published As

Publication number Publication date
CN104461730B (zh) 2017-11-07
CN104461730A (zh) 2015-03-25


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14846593

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14846593

Country of ref document: EP

Kind code of ref document: A1