CN108885604A - Memory fabric software implementation - Google Patents
Memory fabric software implementation
- Publication number
- CN108885604A (application CN201680080706.0A)
- Authority
- CN
- China
- Prior art keywords
- memory
- node
- memory bank
- processing node
- memories
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/10—File systems; File servers
- G06F16/18—File system types
- G06F16/1847—File system types specifically adapted to static storage, e.g. adapted to flash memory or SSD
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0604—Improving or facilitating administration, e.g. storage management
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/0223—User address space allocation, e.g. contiguous or non contiguous base addressing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/06—Addressing a physical block of locations, e.g. base addressing, module addressing, memory dedication
- G06F12/0646—Configuration or reconfiguration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/10—Address translation
- G06F12/1072—Decentralised address translation, e.g. in distributed shared memory systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0662—Virtualisation aspects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/067—Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/21—Employing a record carrier using a specific recording technology
- G06F2212/214—Solid state disk
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
A hardware-based processing node of an object memory fabric may include a memory module that stores and manages one or more memory objects within an object-based memory space. Each memory object may be created natively within the memory module, accessed using a single memory reference instruction without input/output (I/O) instructions, and managed by the memory module at a single memory layer. The memory module may provide an interface layer below the application layer of the software stack. The interface layer may include one or more storage managers that manage processor hardware and control which portions of the object-based memory space are visible to the virtual address space and physical address space of the processor. The storage managers may further provide an interface between the object-based memory space and an operating system executed by the processor, and may provide an alternative, object-memory-based store that is transparent to software using the interface layer.
Description
Cross-Reference to Related Applications

This application claims the benefit, under 35 U.S.C. 119(e), of U.S. Provisional Application No. 62/264,731, entitled "Infinite Memory Fabric Software Implementation," filed by Frank et al. on December 8, 2015, the complete disclosure of which is incorporated herein by reference for all purposes.

This application is also related to the following co-pending and commonly assigned U.S. patent applications:

- U.S. Patent Application No. 15/001,320, entitled "Object Based Memory Fabric," filed by Frank on January 20, 2016;
- U.S. Patent Application No. 15/001,332, entitled "Trans-Cloud Object Based Memory," filed by Frank on January 20, 2016;
- U.S. Patent Application No. 15/001,340, entitled "Universal Single Level Object Memory Address Space," filed by Frank on January 20, 2016;
- U.S. Patent Application No. 15/001,343, entitled "Object Memory Fabric Performance Acceleration," filed by Frank on January 20, 2016;
- U.S. Patent Application No. 15/001,451, entitled "Distributed Index for Fault Tolerance Object Memory Fabric," filed by Frank on January 20, 2016;
- U.S. Patent Application No. 15/001,494, entitled "Implementation of an Object Memory Centric Cloud," filed by Frank on January 20, 2016;
- U.S. Patent Application No. 15/001,524, entitled "Managing MetaData in an Object Memory Fabric," filed by Frank on January 20, 2016;
- U.S. Patent Application No. 15/001,652, entitled "Utilization of a Distributed Index to Provide Object Memory Fabric Coherency," filed by Frank on January 20, 2016;
- U.S. Patent Application No. 15/001,366, entitled "Object Memory Data Flow Instruction Execution," filed by Frank on January 20, 2016;
- U.S. Patent Application No. 15/001,490, entitled "Object Memory Data Flow Trigger," filed by Frank on January 20, 2016;
- U.S. Patent Application No. 15/001,526, entitled "Object Memory Instruction Set," filed by Frank on January 20, 2016;
- U.S. Patent Application No. 15/168,965, entitled "Infinite Memory Fabric Streams and APIs," filed by Frank on May 31, 2016;
- U.S. Patent Application No. 15/169,580, entitled "Infinite Memory Fabric Hardware Implementation with Memory," filed by Frank on May 31, 2016;
- U.S. Patent Application No. 15/169,585, entitled "Infinite Memory Fabric Hardware Implementation with Router," filed by Frank on May 31, 2016;
- U.S. Patent Application No. _______________ (Attorney Docket No. 8620-16), entitled "Memory Fabric Operations and Coherency Using Fault Tolerant Objects," filed concurrently herewith; and
- U.S. Patent Application No. _______________ (Attorney Docket No. 8620-17), entitled "Object Memory Interfaces Across Shared Links," filed concurrently herewith,

the complete disclosure of each of which is incorporated herein by reference for all purposes.
Background

Embodiments of the present invention relate generally to methods and systems for improving the performance of processing nodes in a fabric, and more particularly, to changing the way in which processing, memory, storage, network, and cloud computing are managed to significantly improve the efficiency and performance of commodity hardware.

As the size and complexity of data and the processes performed thereon continually increase, computer hardware is challenged to meet these demands. Current commodity hardware and software solutions from established server, network, and storage vendors are unable to meet the demands of cloud computing and big-data environments. This is due, at least in part, to the way in which these systems manage processing, memory, and storage. Specifically, in current systems, processing is separated from memory, which is in turn separated from storage, and each of processing, memory, and storage is managed separately by software. Each server and other computing device (referred to herein as a node) is in turn separated from other nodes by a physical computer network, is managed separately by software, and in turn, the separate processing, memory, and storage associated with each node are managed by software on that node.
FIG. 1 is a block diagram illustrating an example of the separation of data storage, memory, and processing within prior-art commodity servers and network components. This example illustrates a system 100 in which commodity servers 105 and 110 are communicatively coupled with each other via a physical network 115 and network software 155, as known in the art. Also as known in the art, the servers can each execute any number of one or more applications 120a, 120b, 120c of any type. As known in the art, each application 120a, 120b, 120c executes on a processor (not shown) and memory (not shown) of the server 105 or 110 using data stored in physical storage 150. Each of the servers 105 and 110 maintains a directory 125 mapping the location of the data used by the applications 120a, 120b, 120c. Additionally, each server implements, for each executing application 120a, 120b, 120c, a software stack that includes an application representation 130 of the data, a database representation 135, a file system representation 140, and a storage representation 145.
While effective, there are three reasons that such implementations on current commodity hardware and software solutions from established server, network, and storage vendors are unable to meet the increasing demands of cloud computing and big-data environments. One reason for the shortcomings of these implementations is their complexity. The software stack must be in place, and every application must manage the separation of storage, memory, and processing, as well as apply parallel server resources. Each application must trade off algorithm parallelism, data organization, and data movement, which is very challenging to get correct, let alone considerations of performance and economics. This tends to lead to the implementation of more batch-oriented solutions in the applications, rather than the integrated real-time solutions preferred by most businesses. Additionally, the separation of storage, memory, and processing in such implementations creates significant inefficiency for each layer of the software stack to find, move, and access a block of data, due to the required instruction execution and the latencies of and between each layer of the software stack. Furthermore, this inefficiency limits the economic scaling possible and limits the data size for all but the most extremely parallel algorithms. The reason for the latter is that the efficiency with which servers (processors or threads) can interact limits the degree of parallelism, due to Amdahl's law. Hence, there is a need for improved methods and systems for managing processing, memory, and storage to significantly improve the performance of processing nodes.
Summary

Embodiments of the invention provide systems and methods for managing processing, memory, storage, network, and cloud computing to significantly improve the efficiency and performance of processing nodes. Embodiments described herein can implement an object-based memory fabric that manages memory objects within the memory fabric at the memory layer rather than in the application layer. An interface to this object-based memory fabric can be implemented below the application layer of the software stack. In this way, the differences between the object-based memory space and a conventional address space are transparent to applications, which can use the object-based memory without modification and gain the functional and performance advantages of the object-based memory. Rather, modified storage managers can interface system software, such as a standard operating system like Linux, to the object-based memory. These modified storage managers can provide management of standard processor hardware, such as buffers and caches, can control which portions of the object-based memory space are visible in the narrower physical address space available to the processor, and can be accessed by applications through standard system software. In this way, applications can access and utilize the object-based memory fabric without modification through standard system software, for example through the memory allocation processes of a standard operating system.
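As a rough software analogy of the idea above (not the patent's hardware design), the sketch below models a storage manager that sits below the application: the application calls an ordinary-looking allocation function, and each allocation it receives is transparently a window onto an object in an object-based memory space. All class, method, and address names here are hypothetical.

```python
# Illustrative sketch only; names and addresses are invented for the example.

class ObjectStore:
    """Toy object-based memory space: object address -> mutable byte buffer."""
    def __init__(self):
        self._objects = {}  # object address -> bytearray

    def create(self, obj_addr, size):
        self._objects[obj_addr] = bytearray(size)
        return self._objects[obj_addr]

class StorageManager:
    """Interface layer below the application: applications call alloc() as if
    using a standard allocator; each allocation is actually backed by an
    object in the object-based memory space."""
    def __init__(self, store):
        self._store = store
        self._next_addr = 0x1000

    def alloc(self, size):
        addr = self._next_addr
        self._next_addr += 1
        buf = self._store.create(addr, size)
        # memoryview stands in for mapping the object into the process's
        # virtual address space: reads and writes are plain memory operations.
        return memoryview(buf)

store = ObjectStore()
mgr = StorageManager(store)
view = mgr.alloc(8)
view[0:3] = b"abc"  # the application sees ordinary memory...
assert bytes(store._objects[0x1000][0:3]) == b"abc"  # ...landing in the object
```

The point of the sketch is only the transparency claim: the application never names an object or issues I/O; the storage manager decides which object backs each allocation.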
According to one embodiment, a hardware-based processing node of an object memory fabric may include a memory module storing and managing one or more memory objects within an object-based memory space. Each memory object is created natively within the memory module, is accessed using a single memory reference instruction without input/output (I/O) instructions, and is managed by the memory module at a single memory layer. The memory module may provide an interface layer below an application layer of a software stack. The interface layer may include one or more storage managers that manage processor hardware and control which portions of the object-based memory space are visible to the virtual address space and physical address space of the processor. The one or more storage managers may further provide an interface between the object-based memory space and an operating system executed by the processor, and may further provide an alternative, object-memory-based store that is transparent to file systems, databases, or other software. In some embodiments, the operating system may comprise Linux or Security-Enhanced Linux (SELinux).
The interface layer may provide access to the object-based memory space to one or more applications executing in the application layer of the software stack, for example through memory allocation functions of the operating system. In one embodiment, the interface layer may comprise an object-based-memory-specific version of the operating system's library files. The one or more storage managers may utilize the formats and addressing of the object-based memory space. Additionally or alternatively, the one or more storage managers may comprise, for example, a database manager, a graph database manager, and/or a file system manager.
In one embodiment, the hardware-based processing node may comprise a dual in-line memory module (DIMM) card. In other cases, the hardware-based processing node may comprise a commodity server, wherein the memory module comprises a dual in-line memory module (DIMM) card installed within the commodity server. In other cases, the hardware-based processing node may comprise a mobile computing device. In still other embodiments, the hardware-based processing node may comprise a single chip.
According to another embodiment, an object memory fabric may comprise a plurality of hardware-based processing nodes. Each hardware-based processing node may include a memory module storing and managing one or more memory objects within an object-based memory space. Each memory object is created natively within the memory module, is accessed using a single memory reference instruction without input/output (I/O) instructions, and is managed by the memory module at a single memory layer. A node router may be communicatively coupled with each of the one or more memory modules of the node and adapted to route memory objects, or portions of memory objects, between the one or more memory modules of the node. One or more inter-node routers may be communicatively coupled with each node router. Each of the plurality of nodes of the object memory fabric may be communicatively coupled with at least one of the inter-node routers and adapted to route memory objects, or portions of memory objects, between the plurality of nodes.
Each memory module may provide an interface layer below an application layer of a software stack. The interface layer may include one or more storage managers that manage processor hardware and control which portions of the object-based memory space are visible to the virtual address space and physical address space of the processor. The one or more storage managers may further provide an interface between the object-based memory space and an operating system executed by the processor. For example, the operating system may comprise Linux or Security-Enhanced Linux (SELinux). The one or more storage managers may also provide an alternative, object-memory-based store that is transparent to file systems, databases, or other software.
The interface layer may provide access to the object-based memory space to one or more applications executing in the application layer of the software stack, for example through memory allocation functions of the operating system. In one embodiment, the interface layer may comprise an object-based-memory-specific version of the operating system's library files. The one or more storage managers may utilize the formats and addressing of the object-based memory space. Additionally or alternatively, the one or more storage managers may comprise, for example, a database manager, a graph database manager, and/or a file system manager.
According to yet another embodiment, a method for interfacing an object-based memory fabric with software executing on one or more nodes of the object-based memory fabric may comprise: creating, by a hardware-based processing node of the object-based memory fabric, each memory object natively within a memory module of the hardware-based processing node; accessing, by the hardware-based processing node, each memory object using a single memory reference instruction without input/output (I/O) instructions; and managing, by the hardware-based processing node, each memory object of the memory module at a single memory layer. The hardware-based processing node may also provide an interface layer below an application layer of a software stack. The interface layer includes one or more storage managers that manage processor hardware and control which portions of the object-based memory space are visible to the virtual address space and physical address space of the processor.

By the one or more storage managers, an interface may be provided between the object-based memory space and an operating system executed by the processor. By the one or more storage managers, an alternative, object-memory-based store may be provided that is transparent to file systems, databases, or other software using the interface layer. The one or more storage managers may utilize the formats and addressing of the object-based memory space, and may comprise at least one of a database manager, a graph database manager, or a file system manager. By the interface layer, access to the object-based memory space may be provided to one or more applications executing in the application layer of the software stack, for example through memory allocation functions of the operating system.
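The contrast the method draws, reaching data by a single memory reference rather than through I/O instructions, can be sketched in software terms. The file path and the object contents below are purely illustrative; the real distinction in the patent is in hardware, not Python.

```python
# Hedged sketch: the same byte reached through explicit I/O versus through
# a plain memory reference. Payload and paths are invented for the example.

import os
import tempfile

payload = b"\x2a" + b"\x00" * 7

# Conventional model: data sits behind the storage layer; reaching one byte
# requires I/O instructions (open/seek/read) through the software stack.
fd, path = tempfile.mkstemp()
os.write(fd, payload)
os.lseek(fd, 0, os.SEEK_SET)
byte_via_io = os.read(fd, 1)[0]
os.close(fd)
os.unlink(path)

# Object memory model: the object is local to the memory module, so the same
# byte is a single memory reference (here, an ordinary index expression).
memory_object = bytearray(payload)
byte_via_memory = memory_object[0]

assert byte_via_io == byte_via_memory == 0x2A
```

Both paths yield the same value; the difference is the number of layers, and the latency, each path traverses, which is the inefficiency the Background describes.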
Brief Description of the Drawings
FIG. 1 is a block diagram illustrating an example of the separation of data storage, memory, processing, network, and cloud computing within prior-art commodity servers and network components.
FIG. 2 is a block diagram illustrating components of an exemplary distributed system in which various embodiments of the present invention may be implemented.
FIG. 3 is a block diagram illustrating an exemplary computer system in which embodiments of the present invention may be implemented.
FIG. 4 is a block diagram illustrating an exemplary object memory fabric architecture according to one embodiment of the present invention.
FIG. 5 is a block diagram illustrating an exemplary memory fabric object memory according to one embodiment of the present invention.
FIG. 6 is a block diagram illustrating exemplary object memory dynamics and physical organization according to one embodiment of the present invention.
FIG. 7 is a block diagram illustrating aspects of an object memory fabric hierarchy of object memory according to one embodiment of the present invention, which localizes working sets and allows for virtually unlimited scalability.
FIG. 8 is a block diagram illustrating aspects of an example relationship between object address space, virtual addresses, and physical addresses according to one embodiment of the present invention.
FIG. 9 is a block diagram illustrating aspects of an example relationship between object sizes and object address space pointers according to one embodiment of the present invention.
FIG. 10 is a block diagram illustrating aspects of an example object memory fabric distributed object memory and index structure according to one embodiment of the present invention.
FIG. 11 illustrates aspects of an object memory hit case that executes completely within the object memory according to one embodiment of the present invention.
FIG. 12 illustrates aspects of an object memory miss case and the distributed nature of the object memory and object index according to one embodiment of the present invention.
FIG. 13 is a block diagram illustrating exemplary aspects of leaf-level object memory in view of the object memory fabric distributed object memory and index structure according to one embodiment of the present invention.
FIG. 14 is a block diagram illustrating exemplary aspects of an object memory fabric router object index structure according to one embodiment of the present invention.
FIGS. 15A and 15B are block diagrams illustrating aspects of example index tree structures, including a node index tree structure and a leaf index tree, according to one embodiment of the present invention.
FIG. 16 is a block diagram illustrating aspects of an example physical memory organization according to one embodiment of the present invention.
FIG. 17A is a block diagram illustrating aspects of example object addressing according to one embodiment of the present invention.
FIG. 17B is a block diagram illustrating aspects of example object memory fabric pointer and block addressing according to one embodiment of the present invention.
FIG. 18 is a block diagram illustrating aspects of example object metadata according to one embodiment of the present invention.
FIG. 19 is a block diagram illustrating aspects of an example micro-thread model according to one embodiment of the present invention.
FIG. 20 is a block diagram illustrating aspects of an example relationship of code, frame, and object according to one embodiment of the present invention.
FIG. 21 is a block diagram illustrating exemplary aspects of micro-thread concurrency according to one embodiment of the present invention.
FIG. 22A is a block diagram illustrating an example of streams present on a node with a hardware-based inter-node object router of an object memory fabric, according to some embodiments of the disclosure.
FIG. 22B is a block diagram illustrating an example of software emulation of the object memory and router on a node, according to some embodiments of the disclosure.
FIG. 23 is a block diagram illustrating an example of streams within a memory fabric router, according to some embodiments of the disclosure.
FIG. 24 is a block diagram illustrating a product family hardware implementation architecture, according to certain embodiments of the present disclosure.
FIG. 25 is a block diagram illustrating an alternative product family hardware implementation architecture, according to some embodiments of the disclosure.
FIG. 26 is a block diagram illustrating a memory fabric server view of a hardware implementation architecture, according to some embodiments of the disclosure.
FIG. 27 is a block diagram illustrating a memory module view of a hardware implementation architecture, according to some embodiments of the disclosure.
FIG. 28 is a block diagram illustrating a memory module view of a hardware implementation architecture, according to some embodiments of the disclosure.
FIG. 29 is a block diagram illustrating a node router view of a hardware implementation architecture, according to some embodiments of the disclosure.
FIG. 30 is a block diagram illustrating an inter-node router view of a hardware implementation architecture, according to some embodiments of the disclosure.
FIG. 31 is a block diagram illustrating a memory fabric router view of a hardware implementation architecture, according to some embodiments of the disclosure.
FIG. 32 is a block diagram illustrating object memory fabric functionality that can replace software functionality, according to one embodiment of the disclosure.
FIG. 33 is a block diagram illustrating an object memory fabric software stack, according to one embodiment of the disclosure.
Detailed Description

In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of various embodiments of the present invention. It will be apparent, however, to one skilled in the art that embodiments of the present invention may be practiced without some of these specific details. In other instances, well-known structures and devices are shown in block diagram form.

The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the invention as set forth in the appended claims.

Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.
Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed, but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
The term "machine-readable medium" includes, but is not limited to, portable or fixed storage devices, optical storage devices, wireless channels, and various other media capable of storing, containing, or carrying instruction(s) and/or data. A code segment or machine-executable instructions may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, etc. For the sake of clarity, various other terms used herein are now defined.
Virtual memory is a memory management technique that gives each software process the illusion that memory is as large as the virtual address space. The operating system, in conjunction with varying degrees of hardware, manages physical memory as a cache of the virtual address space, which is placed in secondary storage and accessible through input/output instructions. Virtual memory is separate from, but can interact with, the file system.
Single-level storage is an extension of virtual memory in which there are no files, only persistent objects or segments which are mapped to a process's address space using virtual memory techniques. The entire storage of the computing system is thought of as a segment and an address within a segment. Thus, at least three separate address spaces are managed by software: physical memory address/node, virtual address/process, and secondary storage address/disk.
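The three software-managed address spaces just listed can be made concrete with a minimal sketch: resolving one datum means consulting three separate mappings. All addresses, node names, and disk offsets below are invented for illustration.

```python
# Illustrative only: the three address spaces of single-level storage as
# three software-maintained tables. Contents are invented for the example.

page_table = {0x7F000: 0x12000}          # virtual address / process
node_map   = {0x12000: "node-3"}         # physical memory address / node
swap_map   = {0x12000: ("disk0", 4096)}  # secondary storage address / disk

def resolve(vaddr):
    """Follow one virtual address down through all three address spaces."""
    paddr = page_table[vaddr]            # software-managed translation 1
    node = node_map[paddr]               # software-managed translation 2
    backing = swap_map[paddr]            # software-managed translation 3
    return paddr, node, backing

assert resolve(0x7F000) == (0x12000, "node-3", ("disk0", 4096))
```

The point is the bookkeeping burden: each of the three tables must be kept consistent by software, which is the separation the object memory fabric described herein seeks to collapse into a single memory layer.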
Object store refers to the way in which units of storage, called objects, are organized. Each object comprises a container that includes three things: actual data; expandable metadata; and a globally unique identifier, referred to herein as the object address. The metadata of an object is used to define contextual information about the data and how the data can be used and managed, including its relationships to other objects.
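The three-part container just described can be sketched as a simple data structure. This is an illustrative model only, not the hardware representation the embodiments describe; the class and field names are assumptions chosen for clarity:

```python
from dataclasses import dataclass, field
from typing import Any, Dict
import uuid

@dataclass
class MemoryObject:
    """Minimal sketch of an object-store container: actual data,
    expandable metadata, and a globally unique identifier (the
    object address), which is independent of physical location."""
    data: bytes
    # Expandable metadata: contextual information about the data, how it
    # is used and managed, and its relationships to other objects.
    metadata: Dict[str, Any] = field(default_factory=dict)
    # Globally unique identifier serving as the object address.
    object_address: int = field(default_factory=lambda: uuid.uuid4().int)

# A relationship to another object is recorded in metadata by object
# address rather than by any physical location.
parent = MemoryObject(data=b"parent payload")
child = MemoryObject(data=b"child payload",
                     metadata={"parent": parent.object_address})
```

Because the identifier is globally unique rather than location-based, the relationship recorded in `child.metadata` remains valid no matter where either object is physically stored.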
The object address space is managed by software across storage devices, nodes, and the network to find an object without knowing its physical location. Object store is separate from, but can certainly interoperate through software with, virtual memory and single-level store.
Block storage consists of evenly sized blocks of data with an address based on physical location and with no metadata.
A network address is a physical address of a node within an IP network that is associated with a physical location.
A node, or processing node, is a physical unit of computing delineated by a shared physical memory that can be addressed by any processor within the node.
Object memory is an object store in which each object is directly accessible as memory by processor memory reference instructions, without requiring implicit or explicit software or input/output instructions. Object capabilities are provided directly within the object memory and are processed by memory reference instructions.
An object memory fabric connects object memory modules and nodes into a single object memory in which, through direct management in hardware of object data, metadata, and object addresses, any object is local to any object memory module.
An object router routes objects, or portions of objects, in an object memory fabric based on an object address. This is distinct from a conventional router, which forwards data packets to an appropriate part of a network based on a network address.
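The distinction between the two kinds of routing can be illustrated with a small sketch. This is an assumption-laden analogy, not the patent's hardware design: it borrows longest-prefix matching from conventional IP routing but keys routes on location-independent object addresses, and all class, method, and node names are invented for illustration:

```python
class ObjectRouter:
    """Sketch: routing keyed on object-address prefixes rather than
    network addresses. A longest-prefix match selects the next hop,
    loosely analogous to IP forwarding, but the address identifies an
    object, not a physical location."""

    def __init__(self):
        # object-address prefix (hex string) -> next hop
        self.routes = {}

    def add_route(self, prefix: str, next_hop: str) -> None:
        self.routes[prefix] = next_hop

    def route(self, object_address: str) -> str:
        # Longest-prefix match over the object address space.
        best = max((p for p in self.routes if object_address.startswith(p)),
                   key=len, default=None)
        if best is None:
            raise KeyError(f"no route for object {object_address}")
        return self.routes[best]

router = ObjectRouter()
router.add_route("ab", "node-1")    # coarse route for a region of objects
router.add_route("abcd", "node-2")  # more specific route wins for "abcd…"
```

Under this sketch, moving an object only changes routing state; the object address that applications and memory references use stays the same.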
Embodiments may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware, or microcode, the program code or code segments to perform the necessary tasks may be stored in a machine-readable medium. A processor (or processors) may perform the necessary tasks.
Embodiments of the invention provide systems and methods for managing processing, memory, storage, network, and cloud computing to significantly improve the efficiency and performance of processing nodes. Embodiments described herein can be implemented in a set of hardware components that, in essence, change the way in which processing, memory, storage, network, and cloud computing are managed by breaking down the artificial distinctions between processing, memory, storage, and networking in today's commodity solutions, thereby significantly improving the efficiency and performance of commodity hardware. For example, the hardware elements can include a standard-format memory module, such as a dual inline memory module (DIMM), and a set of one or more object routers. The memory module can be added to commodity or "off-the-shelf" hardware such as a server node and act as a big-data accelerator within that node. Object routers can be used to interconnect two or more servers or other nodes adapted with the memory modules and to help manage processing, memory, and storage across these different servers. Nodes can be physically close together or very distant. Together, these hardware components can be used with commodity servers or other types of computing nodes in any combination to implement the embodiments described herein.
According to one embodiment, such hardware components can implement an object-based memory which manages objects within the memory and at the memory layer rather than in the application layer. That is, the objects and associated properties are implemented and managed natively in memory, enabling the object memory system to provide increased functionality without any software and to improve performance by dynamically managing object properties, including, but not limited to, persistence, location, and processing. Object properties can also propagate up to higher application levels.
Such hardware components can also eliminate the distinction between memory (temporary) and storage (persistent) by implementing and managing both within the objects. These components can eliminate the distinction between local and remote memory by transparently managing the location of objects (or portions of objects), so that all objects appear simultaneously local to all nodes. These components can also eliminate the distinction between processing and memory through methods of processing that are placed within the objects of the memory itself.
According to one embodiment, such hardware components can eliminate the typical size constraints on the memory space of commodity servers imposed by address sizes. Rather, physical addressing can be managed within the memory objects themselves, and the objects can in turn be accessed and managed through an object name space.
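A minimal sketch of what "access through an object name space" could look like follows. It is an illustration under stated assumptions only: the class, method names, and the idea of a name-to-location map held inside the store are hypothetical, standing in for management that the embodiments place in hardware:

```python
class ObjectNamespace:
    """Sketch: callers address objects by name; physical placement is
    tracked inside the store rather than exposed to callers, so the
    reachable object space is not bounded by a node's physical
    address size."""

    def __init__(self):
        self._objects = {}    # object name -> object data
        # Physical placement, managed internally and never returned
        # to callers.
        self._location = {}   # object name -> (node, local offset)

    def put(self, name, data, node, offset):
        self._objects[name] = data
        self._location[name] = (node, offset)

    def get(self, name):
        # Callers see only the name space, never the physical address.
        return self._objects[name]

ns = ObjectNamespace()
ns.put("sensor/frame-0", b"payload", node="node-7", offset=0x2000)
```

The point of the sketch is the separation of concerns: `get` resolves by name alone, so relocating an object only updates the internal placement map.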
Embodiments described herein can provide transparent and dynamic performance acceleration, especially for big data or other memory-intensive applications, by reducing or eliminating the overhead (overhead) typically associated with memory management, storage management, networking, and data directories. Rather, managing memory objects at the memory level can significantly shorten the pathways between storage and memory and between memory and processing, thereby eliminating the associated overhead between each. Various additional details of embodiments of the present invention will be described below with reference to the figures.
Fig. 2 is a block diagram of components of an exemplary distributed system in which various embodiments of the present invention may be implemented. In the illustrated embodiment, distributed system 200 includes one or more client computing devices 202, 204, 206, and 208, which are configured to execute and operate a client application, such as a web browser, proprietary client, or the like, over one or more networks 210. Server 212 can be communicatively coupled with remote client computing devices 202, 204, 206, and 208 via network 210.
In various embodiments, server 212 may be adapted to run one or more services or software applications provided by one or more of the components of the system. In some embodiments, these services may be offered as web-based or cloud services, or under a Software as a Service (SaaS) model, to the users of client computing devices 202, 204, 206, and/or 208. Users operating client computing devices 202, 204, 206, and/or 208 may in turn utilize one or more client applications to interact with server 212 to utilize the services provided by these components. For clarity, it should be noted that server 212 and databases 214, 216 can correspond to server 105 described above with reference to Fig. 1. Network 210 can be part of or an extension to physical network 115. It should also be understood that there can be any number of client computing devices 202, 204, 206, 208 and servers 212, each with one or more databases 214, 216.
In the configuration depicted in the figure, the software components 218, 220, and 222 of system 200 are shown as being implemented on server 212. In other embodiments, one or more of the components of system 200 and/or the services provided by these components may also be implemented by one or more of the client computing devices 202, 204, 206, and/or 208. Users operating the client computing devices may then utilize one or more client applications to use the services provided by these components. These components may be implemented in hardware, firmware, software, or combinations thereof. It should be appreciated that various different system configurations are possible, which may be different from distributed system 200. The embodiment shown in the figure is thus one example of a distributed system for implementing an embodiment system and is not intended to be limiting.
Client computing devices 202, 204, 206, and/or 208 may be portable handheld devices (e.g., a cellular phone, a computing tablet, a personal digital assistant (PDA)) or wearable devices (e.g., a Google Glass® head-mounted display), running software such as Microsoft Windows Mobile® and/or a variety of mobile operating systems such as iOS, Windows Phone, Android, BlackBerry 10, Palm OS, and the like, and being Internet, e-mail, short message service (SMS), BlackBerry®, or other communication protocol enabled. The client computing devices can be general-purpose personal computers, including, by way of example, personal computers and/or laptop computers running various versions of Microsoft Windows®, Apple Macintosh®, and/or Linux operating systems. The client computing devices can be workstation computers running any of a variety of commercially available UNIX® or UNIX-like operating systems, including without limitation the variety of GNU/Linux operating systems, such as Google Chrome OS. Alternatively, or in addition, client computing devices 202, 204, 206, and 208 may be any other electronic device, such as a thin-client computer, an Internet-enabled gaming system (e.g., a Microsoft Xbox gaming console, with or without a Kinect® gesture input device), and/or a personal messaging device capable of communicating over network(s) 210.
Although exemplary distributed system 200 is shown with four client computing devices, any number of client computing devices may be supported. Other devices, such as devices with sensors and the like, may interact with server 212.
Network(s) 210 in distributed system 200 may be any type of network familiar to those skilled in the art that can support data communications using any of a variety of commercially available protocols, including without limitation TCP/IP (transmission control protocol/Internet protocol), SNA (systems network architecture), IPX (Internet packet exchange), AppleTalk, and the like. Merely by way of example, network(s) 210 can be a local area network (LAN), such as one based on Ethernet, Token-Ring, and/or the like. Network(s) 210 can be a wide-area network and the Internet. They can include a virtual network, including without limitation a virtual private network (VPN); an intranet; an extranet; a public switched telephone network (PSTN); an infra-red network; a wireless network (e.g., a network operating under any of the Institute of Electrical and Electronics Engineers (IEEE) 802.11 suite of protocols, Bluetooth®, and/or any other wireless protocol); and/or any combination of these and/or other networks. Elements of these networks can be any distance apart, i.e., can be remote or co-located. Software-defined networks (SDNs) can be implemented with a combination of dumb routers and software running on servers.
Server 212 may be composed of one or more general-purpose computers, specialized server computers (including, by way of example, PC (personal computer) servers, UNIX® servers, mid-range servers, mainframe computers, rack-mounted servers, etc.), server farms, server clusters, or any other appropriate arrangement and/or combination. In various embodiments, server 212 may be adapted to run one or more services or software applications described in the foregoing disclosure. For example, server 212 may correspond to a server for performing the processing described above according to an embodiment of the present disclosure.
Server 212 may run an operating system including any of those discussed above, as well as any commercially available server operating system. Server 212 may also run any of a variety of additional server applications and/or mid-tier applications, including hypertext transport protocol (HTTP) servers, file transfer protocol (FTP) servers, common gateway interface (CGI) servers, JAVA® servers, database servers, and the like. Exemplary database servers include without limitation those commercially available from Oracle, Microsoft, Sybase, International Business Machines (IBM), and the like.
In some embodiments, server 212 may include one or more applications to analyze and consolidate data feeds and/or event updates received from users of client computing devices 202, 204, 206, and 208. As an example, data feeds and/or event updates may include, but are not limited to, Twitter® feeds, Facebook® updates, or real-time updates received from one or more third-party information sources and continuous data streams, which may include real-time events related to sensor data applications, financial tickers, network performance measuring tools (e.g., network monitoring and traffic management applications), clickstream analysis tools, automobile traffic monitoring, and the like. Server 212 may also include one or more applications to display the data feeds and/or real-time events via one or more display devices of client computing devices 202, 204, 206, and 208.
Distributed system 200 may also include one or more databases 214 and 216. Databases 214 and 216 may reside in a variety of locations. By way of example, one or more of databases 214 and 216 may reside on a non-transitory storage medium local to (and/or resident in) server 212. Alternatively, databases 214 and 216 may be remote from server 212 and in communication with server 212 via a network-based or dedicated connection. In one set of embodiments, databases 214 and 216 may reside in a storage-area network (SAN). Similarly, any necessary files for performing the functions attributed to server 212 may be stored locally on server 212 and/or remotely, as appropriate. In one set of embodiments, databases 214 and 216 may include relational databases that are adapted to store, update, and retrieve data in response to commands, e.g., MySQL-formatted commands. Additionally or alternatively, server 212 may provide and support big-data processing on unstructured data, including but not limited to Hadoop processing, NoSQL databases, graph databases, etc. In yet other implementations, server 212 may perform non-database types of big-data applications, including but not limited to machine learning.
Fig. 3 is a block diagram illustrating an exemplary computer system in which embodiments of the present invention may be implemented. System 300 may be used to implement any of the computer systems described above. As shown in the figure, computer system 300 includes a processing unit 304 that communicates with a number of peripheral subsystems via a bus subsystem 302. These peripheral subsystems may include a processing acceleration unit 306, an I/O subsystem 308, a storage subsystem 318, and a communications subsystem 324. Storage subsystem 318 includes tangible computer-readable storage media 322 and a system memory 310.
Bus subsystem 302 provides a mechanism for letting the various components and subsystems of computer system 300 communicate with each other as intended. Although bus subsystem 302 is shown schematically as a single bus, alternative embodiments of the bus subsystem may utilize multiple buses. Bus subsystem 302 may be any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. For example, such architectures may include an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus, which can be implemented as a Mezzanine bus manufactured to the IEEE P1386.1 standard, or a PCI enhanced (PCIe) bus.
Processing unit 304, which can be implemented as one or more integrated circuits (e.g., a conventional microprocessor or microcontroller), controls the operation of computer system 300. One or more processors may be included in processing unit 304. These processors may include single-core or multicore processors. In certain embodiments, processing unit 304 may be implemented as one or more independent processing units 332 and/or 334, with single- or multicore processors included in each processing unit. In other embodiments, processing unit 304 may also be implemented as a quad-core processing unit formed by integrating two dual-core processors into a single chip.
In various embodiments, processing unit 304 can execute a variety of programs in response to program code and can maintain multiple concurrently executing programs or processes. At any given time, some or all of the program code to be executed can be resident in processor(s) 304 and/or in storage subsystem 318. Through suitable programming, processor(s) 304 can provide the various functionalities described above. Computer system 300 may additionally include a processing acceleration unit 306, which can include a digital signal processor (DSP), a special-purpose processor, and/or the like.
I/O subsystem 308 may include user interface input devices and user interface output devices. User interface input devices may include a keyboard, pointing devices such as a mouse or trackball, a touchpad or touch screen incorporated into a display, a scroll wheel, a click wheel, a dial, a button, a switch, a keypad, audio input devices with voice command recognition systems, microphones, and other types of input devices. User interface input devices may include, for example, motion sensing and/or gesture recognition devices such as the Microsoft Kinect® motion sensor that enables users to control and interact with an input device, such as the Microsoft Xbox® 360 game controller, through a natural user interface using gestures and spoken commands. User interface input devices may also include eye gesture recognition devices such as the Google Glass® blink detector that detects eye activity from users (e.g., "blinking" while taking pictures and/or making a menu selection) and transforms the eye gestures into input to an input device (e.g., Google Glass®). Additionally, user interface input devices may include voice recognition sensing devices that enable users to interact with voice recognition systems (e.g., the Siri® navigator) through voice commands. User interface input devices may also include, without limitation, three-dimensional (3D) mice, joysticks or pointing sticks, gamepads and graphic tablets, and audio/visual devices such as speakers, digital cameras, digital camcorders, portable media players, webcams, image scanners, fingerprint scanners, barcode readers, 3D scanners, 3D printers, laser rangefinders, and eye gaze tracking devices. Additionally, user interface input devices may include, for example, medical imaging input devices such as computed tomography, magnetic resonance imaging, positron emission tomography, and medical ultrasonography devices. User interface input devices may also include, for example, audio input devices such as MIDI keyboards, digital musical instruments, and the like.
User interface output devices may include a display subsystem, indicator lights, or non-visual displays such as audio output devices, and the like. The display subsystem may be a cathode ray tube (CRT), a flat-panel device (such as one using a liquid crystal display (LCD) or plasma display), a projection device, a touch screen, and the like. In general, use of the term "output device" is intended to include all possible types of devices and mechanisms for outputting information from computer system 300 to a user or other computer. For example, user interface output devices may include, without limitation, a variety of display devices that visually convey text, graphics, and audio/video information, such as monitors, printers, speakers, headphones, automotive navigation systems, plotters, voice output devices, and modems.
Computer system 300 may comprise a storage subsystem 318 that comprises software elements, shown as being currently located within a system memory 310. System memory 310 may store program instructions that are loadable and executable on processing unit 304, as well as data generated during the execution of these programs.
Depending on the configuration and type of computer system 300, system memory 310 may be volatile (such as random access memory (RAM)) and/or non-volatile (such as read-only memory (ROM), flash memory, etc.). The RAM typically contains data and/or program modules that are immediately accessible to and/or presently being operated on and executed by processing unit 304. In some cases, system memory 310 may comprise one or more double data rate fourth-generation (DDR4) dual inline memory modules (DIMMs). In some implementations, system memory 310 may include multiple different types of memory, such as static random access memory (SRAM) or dynamic random access memory (DRAM). In some implementations, a basic input/output system (BIOS), containing the basic routines that help to transfer information between elements within computer system 300, such as during start-up, may typically be stored in the ROM. By way of example, and not limitation, system memory 310 also illustrates application programs 312, which may include client applications, web browsers, mid-tier applications, relational database management systems (RDBMS), etc., program data 314, and an operating system 316. By way of example, operating system 316 may include various versions of Microsoft Windows®, Apple Macintosh®, and/or Linux operating systems, a variety of commercially available UNIX® or UNIX-like operating systems (including without limitation the variety of GNU/Linux operating systems, Google Chrome® OS, and the like), and/or mobile operating systems such as iOS, Windows® Phone, Android® OS, BlackBerry® 10 OS, and Palm® OS operating systems.
Storage subsystem 318 may also provide a tangible computer-readable storage medium for storing the basic programming and data constructs that provide the functionality of some embodiments. Software (programs, code modules, instructions) that, when executed by a processor, provides the functionality described above may be stored in storage subsystem 318. These software modules or instructions may be executed by processing unit 304. Storage subsystem 318 may also provide a repository for storing data used in accordance with the present invention.
Storage subsystem 318 may also include a computer-readable storage media reader 320 that can further be connected to computer-readable storage media 322. Together and, optionally, in combination with system memory 310, computer-readable storage media 322 may comprehensively represent remote, local, fixed, and/or removable storage devices plus storage media for temporarily and/or more permanently containing, storing, transmitting, and retrieving computer-readable information.
Computer-readable storage media 322 containing code, or portions of code, can include any appropriate media known or used in the art, including storage media and communication media, such as, but not limited to, volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage and/or transmission of information. This can include tangible computer-readable storage media such as RAM, ROM, electronically erasable programmable ROM (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disk (DVD), or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or other tangible computer-readable media. This can also include nontangible computer-readable media, such as data signals, data transmissions, or any other medium that can be used to transmit the desired information and that can be accessed by computing system 300.
By way of example, computer-readable storage media 322 may include a hard disk drive that reads from or writes to non-removable, non-volatile magnetic media; a magnetic disk drive that reads from or writes to a removable, non-volatile magnetic disk; and an optical disk drive that reads from or writes to a removable, non-volatile optical disk (e.g., a CD-ROM, DVD, or Blu-Ray® disk, or other optical media). Computer-readable storage media 322 may include, but is not limited to, Zip® drives, flash memory cards, universal serial bus (USB) flash drives, secure digital (SD) cards, DVD disks, digital video tape, and the like. Computer-readable storage media 322 may also include solid-state drives (SSDs) based on non-volatile memory, such as flash-memory-based SSDs, enterprise flash drives, solid-state ROM, and the like; SSDs based on volatile memory, such as solid-state RAM, dynamic RAM, and static RAM; DRAM-based SSDs; magnetoresistive RAM (MRAM) SSDs; and hybrid SSDs that use a combination of DRAM- and flash-memory-based SSDs. The disk drives and their associated computer-readable media may provide non-volatile storage of computer-readable instructions, data structures, program modules, and other data for computer system 300.
Communications subsystem 324 provides an interface to other computer systems and networks. Communications subsystem 324 serves as an interface for receiving data from and transmitting data to other systems from computer system 300. For example, communications subsystem 324 may enable computer system 300 to connect to one or more devices via the Internet. In some embodiments, communications subsystem 324 can include radio frequency (RF) transceiver components for accessing wireless voice and/or data networks (e.g., using cellular telephone technology; advanced data network technology such as 3G, 4G, or EDGE (enhanced data rates for global evolution); WiFi (IEEE 802.11 family standards) or other mobile communication technologies; or any combination thereof), global positioning system (GPS) receiver components, and/or other components. In some embodiments, communications subsystem 324 can provide wired network connectivity (e.g., Ethernet) in addition to or instead of a wireless interface. In some cases, communications subsystem 324 can be implemented in whole or in part as one or more PCIe cards.
In some embodiments, communications subsystem 324 may also represent a communication interface through which computer system 300 can receive input in the form of structured and/or unstructured data feeds 326, event streams 328, event updates 330, and the like, on behalf of one or more users of computer system 300.
By way of example, communications subsystem 324 may be configured to receive data feeds 326 in real time from users of social networks and/or other communication services, such as Twitter® feeds, Facebook® updates, web feeds such as Rich Site Summary (RSS) feeds, and/or real-time updates from one or more third-party information sources.
Additionally, communications subsystem 324 may also be configured to receive data in the form of continuous data streams, which may include event streams 328 of real-time events and/or event updates 330, and which may be continuous or unbounded in nature with no explicit end. Examples of applications that generate continuous data may include, for example, sensor data applications, financial tickers, network performance measuring tools (e.g., network monitoring and traffic management applications), clickstream analysis tools, automobile traffic monitoring, and the like.
Communications subsystem 324 may also be configured to output the structured and/or unstructured data feeds 326, event streams 328, event updates 330, and the like to one or more databases that may be in communication with one or more streaming data source computers coupled to computer system 300.
Computer system 300 can be one of various types, including a handheld portable device (e.g., a cellular phone, a computing tablet, a PDA), a wearable device (e.g., a Google Glass® head-mounted display), a PC, a workstation, a mainframe, a kiosk, a server rack, or any other data processing system.
Due to the ever-changing nature of computers and networks, the description of computer system 300 depicted in the figure is intended only as a specific example. Many other configurations having more or fewer components than the system depicted in the figure are possible. For example, customized hardware might also be used, and/or particular elements might be implemented in hardware, firmware, software (including applets), or a combination. Further, connections to other computing devices, such as network input/output devices, may be employed. Based on the disclosure and teachings provided herein, a person of ordinary skill in the art will appreciate other ways and/or methods to implement the various embodiments.
As noted above, embodiments of the present invention provide systems and methods for managing processing, memory, storage, network, and cloud computing to significantly improve the efficiency and performance of processing nodes, where a processing node is any of the servers or other computers or computing devices described above. Embodiments described herein can be implemented as a set of hardware components that, in essence, change the way in which processing, memory, storage, network, and cloud computing are managed by breaking down the artificial distinctions between processing, memory, storage, network, and cloud in today's commodity solutions, thereby significantly improving the performance of commodity hardware. For example, the hardware elements can include a standard-format memory module, such as a dual inline memory module (DIMM), which can be added to any of the computer systems described above. For example, the memory module can be added to commodity or "off-the-shelf" hardware such as a server node and act as a big-data accelerator within that node. The components can also include one or more object routers. An object router can comprise, for example, a PCI Express card added to the server node along with the memory module, and one or more external object routers, such as a rack-mounted router. Object routers can be used to interconnect two or more servers or other nodes adapted with the memory modules and to help manage processing, memory, and storage across these different servers. Object routers can forward objects or portions of objects based on object addresses and participate in the operation of the object memory fabric. Together, these hardware components can be used with commodity servers or other types of computing nodes in any combination to implement an object memory fabric architecture.
Fig. 4 is a block diagram illustrating an exemplary object memory fabric architecture according to one embodiment of the present invention. As illustrated here, the architecture 400 comprises an object memory fabric 405 supporting any number of applications (apps) 410a-g. As will be described in greater detail below, this object memory fabric 405 can comprise any number of processing nodes, such as one or more servers having installed one or more memory modules as described herein. These nodes can be interconnected by one or more internal and/or external object routers as described herein. While described as comprising one or more servers, it should be noted that the processing nodes of the object memory fabric 405 can comprise any of a variety of different computers and/or computing devices adapted to operate within the object memory fabric 405 as described herein.
According to one embodiment, the object memory fabric 405 provides an object-based memory which manages memory objects within the memory of each node of the object memory fabric 405 and at the memory layer rather than in the application layer. That is, the objects and associated properties can be implemented and managed natively in the nodes of the object memory fabric 405, enabling the object memory fabric 405 to provide increased functionality without any software and to improve efficiency and performance by dynamically managing object properties, including, but not limited to, persistence, location, and processing. Object properties can also propagate to the applications 410a-g. The memory objects of the object memory fabric 405 can be used to eliminate the typical size constraints on the memory space of the commodity servers or other nodes imposed by address sizes. Rather, physical addressing can be managed within the memory objects themselves, and the objects can in turn be accessed and managed through the object name space. The memory objects of the object memory fabric 405 can also be used to eliminate the distinction between memory (temporary) and storage (persistent) by implementing and managing both within the objects. By transparently managing the location of objects (or portions of objects), the object memory fabric 405 can also eliminate the distinction between local and remote memory, so that all objects appear simultaneously local to all nodes. The memory objects can also eliminate the distinction between processing and memory through methods of processing that are placed within the objects of the memory itself. In other words, embodiments of the present invention provide a single-level memory that puts the computing with the storage and the storage with the computing, directly, thereby eliminating the artificial overhead of moving data to be processed and the multiple levels of software overhead for communicating across those levels.
In these ways, embodiments of the object memory fabric 405 and its components as described herein can provide transparent and dynamic performance acceleration, especially for big data or other memory-intensive applications, by reducing or eliminating the overhead typically associated with memory management, storage management, networking, data directories, and data buffers at both the system and application software layers. Rather, management of the memory objects at the memory level can significantly shorten the pathways between storage and memory and between memory and processing, thereby eliminating the associated overhead between each of them.
Embodiment provides consistency, hardware based, unlimited memory, accelerates as performance in memory
, across all nodes and expansible on all the nodes memory object is managed.This allowed based on object and end
The transparent dynamic property of end application accelerates.Using the framework of embodiment according to the present invention, can be regarded using with system software
To be as single standard server and simple, but extraly allow memory construction object heuristic to capture
(heuristics).Embodiment provides the accelerating ability of various dimensions, including local acceleration.According to one embodiment, with memory
The associated object memories structural metadata of object may include trigger, and object memories structure framework is made
Data localize and move the data into quick DRAM memory before use.Trigger may be such that storage system can
The basic summary of arbitrary function is executed based on memory access.Various embodiments can also include instruction set, which can
To be provided only based on the trigger defined in metadata associated with each memory object for object memories structure
One demand model, and support core to operate and optimize and allow in memory construction in a manner of highly-parallel more efficiently
Execute the memory-intensive part of application.
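The trigger mechanism described above can be pictured as a small callback attached to object metadata that runs on each memory access and stages data into the fast DRAM tier before it is needed. The following Python sketch is purely illustrative; the class and function names (ObjectMeta, TieredStore, prefetch_next) are assumptions for exposition and do not come from the patent.

```python
# Illustrative model of a metadata trigger that prefetches the sequentially
# next block of an object into a fast tier (DRAM) on each access.
class ObjectMeta:
    def __init__(self, size, triggers=None):
        self.size = size
        self.triggers = triggers or []  # callables run on every access

class TieredStore:
    def __init__(self):
        self.flash = {}  # slow, cheap tier: block number -> data
        self.dram = {}   # fast tier, filled ahead of use by triggers

    def access(self, meta, block_id):
        for trig in meta.triggers:
            trig(self, block_id)          # e.g. localize related blocks
        if block_id not in self.dram:     # demand-fill on a miss
            self.dram[block_id] = self.flash[block_id]
        return self.dram[block_id]

def prefetch_next(store, block_id):
    """Trigger: stage the next sequential block into DRAM before use."""
    nxt = block_id + 1
    if nxt in store.flash and nxt not in store.dram:
        store.dram[nxt] = store.flash[nxt]

store = TieredStore()
store.flash = {0: b"a", 1: b"b", 2: b"c"}
meta = ObjectMeta(size=3, triggers=[prefetch_next])
store.access(meta, 0)  # block 1 is prefetched as a side effect
```

In this sketch the trigger fires on every access, which corresponds to the idea that the memory system itself, not application software, decides when to localize data.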
Embodiments can also decrease software path length by substituting a small number of memory references for a complex application, storage, and network stack. This can be accomplished because, under embodiments of the present invention, memory and storage are directly addressable as memory. Embodiments can additionally provide accelerated performance of high-level memory operations. For many cases, embodiments of the object memory fabric architecture can eliminate the need to move data to the processor and back to memory, which is extremely inefficient for today's modern processors with three or more levels of caches.
Fig. 5 is a block diagram illustrating an exemplary memory fabric object memory according to one embodiment of the present invention. More specifically, this example illustrates an application view of how the memory fabric object memory can be organized. The memory fabric object address space 500 can be a 128-bit linear address space, where the object ID corresponds to the start of the addressable object. An object 510 can be of variable size from 2^12 to 2^64 bytes. Since object storage is allocated on a per-block basis, the address space 500 can be utilized sparsely within or across objects efficiently. The size of the object space 500 must be large enough so that garbage collection is not necessary and disjoint systems can be easily combined.
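The properties just described, a 128-bit linear space in which an object's ID is its start address, object sizes are powers of two between 2^12 and 2^64 bytes, and addresses are handed out once and never garbage collected, can be sketched as a small allocator model. This is a minimal illustration under stated assumptions, not the patent's implementation; the ObjectSpace class and its alignment rule (objects starting on a modulo-size boundary, as stated later in this description) are the only mechanics modeled.

```python
# Minimal model of the 128-bit object address space: ObjectID == object
# start address, power-of-two sizes, addresses used once and never recycled.
OA_BITS = 128
MIN_OBJ, MAX_OBJ = 2**12, 2**64

class ObjectSpace:
    def __init__(self):
        self.next_free = 0
        self.objects = {}  # object_id (start address) -> size

    def create(self, size):
        assert MIN_OBJ <= size <= MAX_OBJ and size & (size - 1) == 0
        # Objects start on a modulo-object-size boundary.
        start = (self.next_free + size - 1) // size * size
        assert start + size <= 2**OA_BITS
        self.objects[start] = size
        self.next_free = start + size  # no garbage collection: never reused
        return start                   # the ObjectID

space = ObjectSpace()
a = space.create(2**12)   # smallest object
b = space.create(2**21)   # 2 MB object, aligned to a 2**21 boundary
```

Note that creating an object here consumes only address space; as the surrounding text explains, storage is attached separately, block by block.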
The object metadata 505 associated with each object 510 can be transparent with respect to the object address space 500, can utilize the object memory fabric to manage objects and blocks within objects, and can be accessible, with appropriate permissions, by the applications 515a-g through the object memory fabric Application Program Interface (API). This API provides functions for applications to set up and maintain the object memory fabric, for example, through a modified Linux libc library. With a small amount of additional effort, applications such as an SQL database or a graph database can utilize the API to create memory objects and to provide and/or augment object metadata to allow the object memory fabric to better manage the objects. The object metadata 505 can include object methods, which enable performance optimization through dynamic object-based processing, distribution, and parallelization. The metadata can enable each object to have a definable security policy and access encapsulation within the object.
According to embodiments of the present invention, the applications 515a-g can now access a single object that captures their working and/or persistent data (for example, App0 515a), or can access multiple objects for finer granularity (for example, App1 515b). Applications can also share objects. The object memory 500 according to these embodiments can physically achieve this powerfully simple application view through a combination of the physical organization, which will be described in greater detail below with reference to Fig. 6, and object memory dynamics. Generally, the object memory 500 can be organized as a distributed hierarchy that creates hierarchical neighborhoods of object storage and applications 515a-g. The object memory dynamics interact with and leverage the hierarchical organization to dynamically create locals of objects and of the applications (object methods) that operate on objects. Since the object methods can be associated with memory objects, as objects migrate and replicate across the memory fabric the object methods naturally gain the increased parallelism that object size warrants. The hierarchy, in conjunction with the object dynamics, can further create neighborhoods of neighborhoods based on the size and dynamics of the object methods.
Fig. 6 is a block diagram illustrating exemplary object memory dynamics and physical organization according to one embodiment of the present invention. As illustrated in this example, an object memory fabric 600 as described above can include any number of processing nodes 605 and 610 communicatively coupled via one or more external object routers 615. Each of the nodes 605 and 610 can include an internal object router 620 and one or more memory modules. Each memory module 625 can include a node object memory 635 supporting any number of applications 515a-g. Generally speaking, the memory module 625, the node object router 620, and the inter-node object router 615 can share common functionality with respect to the object memory 635 and its index. In other words, the underlying design objects can be reused in all three, providing a common design adaptable to hardware of any of a variety of different form factors and types, in addition to those implementations illustrated here by way of example.
More specifically, a node can comprise a single node object router 620 and one or more memory modules 625 and 630. According to one embodiment, the node 605 can comprise a commodity or "off-the-shelf" server, the memory module 625 can comprise a standard format memory card such as a Dual Inline Memory Module (DIMM) card, and the node object router 620 can similarly comprise a standard format card such as a Peripheral Component Interconnect express (PCIe) card. The node object router 620 can implement an object index covering the objects/blocks held in the object memory(s) 635 of the memory modules 625 and 630 within the same node 605. Each of the memory modules 625 and 630 can hold the actual objects and the blocks within objects, the corresponding object metadata, and an object index covering the objects currently stored locally in that memory module. Each of the memory modules 625 and 630 can independently manage both DRAM memory (fast and relatively expensive) and flash memory (not as fast, but considerably less expensive) in such a manner that the processor (not shown) of the node 605 perceives a flash-sized amount of fast DRAM. The memory modules 625 and 630 and the node object router 620 can both manage free storage through a free storage index implemented in the same manner as the other indexes. The memory modules 625 and 630 can be directly accessed over the standard DDR memory bus by processor caches and processor memory reference instructions. In this way, the memory objects of the memory modules 625 and 630 can be accessed using only conventional memory reference instructions, without implicit or explicit Input/Output (I/O) instructions.
The objects within the object memory 635 of each node 625 can be created and maintained through an object memory fabric API (not shown). The node object router 620 can communicate with the API through a modified object memory fabric version of the libc library and an object memory fabric driver (not shown). The node object router 620 can then update a local object index as needed, send commands toward the root (i.e., toward the inter-node object router 615) as needed, and communicate with the appropriate memory module 625 or 630 to complete the API command locally. The memory module 625 or 630 can send administrative requests back to the node object router 620, which can handle them appropriately.
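The command path just described, in which the node object router completes a request locally when one of its memory modules holds the object and otherwise forwards the request toward the root, can be sketched as follows. This is a hedged illustration; the class names (MemoryModule, NodeObjectRouter) and the string-based command handling are assumptions made for clarity, not the patent's interfaces.

```python
# Sketch of a node object router: complete an API command locally when a
# memory module on this node holds the object; otherwise forward toward
# the root (the inter-node object router).
class MemoryModule:
    def __init__(self, objects):
        self.objects = objects  # object_id -> blocks held by this module

    def handle(self, cmd, object_id):
        return f"{cmd} on {object_id} done locally"

class NodeObjectRouter:
    def __init__(self, modules, uplink):
        self.modules = modules
        self.uplink = uplink  # path toward the inter-node object router
        # Local object index: which module holds which object.
        self.index = {oid: m for m in modules for oid in m.objects}

    def route(self, cmd, object_id):
        module = self.index.get(object_id)
        if module is not None:
            return module.handle(cmd, object_id)
        return self.uplink(cmd, object_id)  # not local: send toward root

module = MemoryModule({"objA": {}})
router = NodeObjectRouter(
    [module], uplink=lambda c, o: f"{c} on {o} forwarded to root")
```

A lookup that hits the local index never leaves the node, which is the source of the neighborhood locality discussed later in this description.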
According to one embodiment, the internal architecture of the node object router 620 can be very similar to that of the memory module 625, with the differences related to the routing functionality, such as managing the node memory object index and routing appropriate packets to and from the memory modules 625 and 630 and the inter-node object router 615. That is to say, the node object router 620 can have additional routing functionality, but does not need to actually store memory objects.
The inter-node object router 615 can be considered analogous to an IP router. However, a first difference is the addressing model used. IP routers utilize a fixed static address per node and route to a fixed physical node based on the destination IP address. By contrast, the inter-node object router 615 of the object memory fabric 600 utilizes a memory fabric object address (OA), which specifies an object and a specific block of the object. Objects and blocks can dynamically reside at any node. The inter-node object router 615 can route OA packets based on the dynamic location(s) of objects and blocks, and can track object/block locations dynamically in real time. A second difference is that the object routers can implement the object memory fabric distributed protocol, which provides the dynamic nature of object/block location and object functions, including, for example, but not limited to, triggers. The inter-node object router 615 can be implemented as a scaled-up version of the node object router 620, with increased object index storage capacity, processing rate, and overall routing bandwidth. Also, instead of being attached to a single PCIe or other bus or channel to connect to memory modules, the inter-node object router 615 can connect to multiple node object routers and/or multiple other inter-node object routers. According to one embodiment, a node object router 620 can communicate with the memory modules 625 and 630 through direct memory access (DMA) over PCIe and the memory bus (not shown) of the node 605. The node object routers of different nodes 605 and 610 can in turn be connected with one or more inter-node object routers 615 over a high-speed network (not shown), such as 25/100GE fiber, that uses several layers of the standard Gigabit Ethernet protocol or of the object memory fabric protocol tunneled through standard IP tunnels. Multiple inter-node object routers can connect with the same network.
In operation, the memory fabric object memory can utilize a combination of the physical organization and the object memory dynamics to achieve the powerful simple application view described above with reference to Figs. 4 and 5. According to one embodiment, and as introduced with reference to Fig. 5, the memory fabric object memory can be organized as a distributed hierarchy, the hierarchy forming hierarchical neighborhoods of object storage and applications 515a-g. The node object routers can keep track of which objects, or portions of objects, are local to a neighborhood. The actual object memory can be located on the nodes 605 or 610 near the applications 515a-g and the memory fabric object methods.
Also as introduced above, the object memory dynamics can interact with and leverage the hierarchical organization to dynamically create locals of objects and of the applications (object methods) that operate on objects. Since the object methods can be associated with objects, as objects migrate and replicate across nodes the object methods naturally gain the increased parallelism that object size warrants. This object hierarchy, in conjunction with the object dynamics, can in turn create neighborhoods of neighborhoods based on the size and dynamics of the object methods.
For example, App0 515a spans multiple memory modules 625 and 630 within a single-level object memory fabric neighborhood, in this case the node 605. Object movement can stay within that neighborhood and its node object router 620 without requiring any other communication links or routers. The self-organizing nature of the neighborhoods defined along the hierarchy provides efficiency from both performance and minimum-bandwidth perspectives. In another example, App1 (A1) 515b can have the same characteristics but in a different neighborhood, i.e., in the node 610. App2 (A2) 515c can be a parallel application spanning a two-level hierarchy neighborhood (i.e., the nodes 605 and 610). The interactions can be self-contained in the respective neighborhoods.
As noted above, certain embodiments may include a data type and metadata architecture, and certain embodiments may further include a data type and metadata architecture that facilitates multiple advantages of the present invention. With respect to this architecture, the following description discloses various aspects: object memory fabric address spaces; an object memory fabric coherent object address space; an object memory fabric distributed object memory and index; an object memory fabric index; object memory fabric objects; and an extended instruction execution model. Various embodiments may include any one of these aspects, or a combination thereof.
Fig. 7 is a block diagram illustrating an aspect of an object memory fabric hierarchy of object memory, which localizes working sets and allows for virtually unlimited scalability, according to one embodiment of the present invention. As disclosed herein, certain embodiments may include core mechanisms and data types that enable the object memory fabric to operate dynamically to provide the object memory application view. The mechanisms and data types facilitate the fractal-like characteristics of the system, which allow the system to behave identically in a scale-independent fashion. In the depicted example, an object memory fabric 700 as disclosed herein can include any number of processing nodes 705 and 710 communicatively coupled at higher levels via one or more external object routers, such as the object router 715, which may in turn be coupled to one or more object routers at higher levels.
Specifically, the system may be constructed of nodes as a fat tree, from leaf nodes to root node(s). According to certain embodiments, each node may merely understand whether its scope encompasses an object and, based on that, route requests/responses toward the root or toward a leaf. Put together, these nodes enable the system to dynamically scale to any capacity without impacting the operation or perspective of any node. In some embodiments, a leaf node may be a DIMM built from standard memory chips, plus the object memory fabric 700 implemented within an FPGA. In some embodiments, standard memory chips could have the object memory fabric 700 embedded. In various embodiments, implementations may have remote nodes, such as mobile phones, drones, cars, internet-of-things components, and/or the like.
To facilitate various advantageous properties of the object memory fabric 700, certain embodiments may employ a coherent object memory fabric address space. Table 1 below identifies non-limiting examples of various aspects of address spaces in accordance with certain embodiments of the present disclosure. According to certain embodiments, all nodes connected to a single object memory fabric 700, whether local or distributed, can be considered part of a single system environment. As indicated in Table 1, the object memory fabric 700 can provide a coherent object address space. In some embodiments, a 128-bit object address space may be provided; however, other embodiments are possible. There are several reasons for a large object address space, including the following. The object address space directly and uniquely addresses and manages all memory and storage across all nodes within an object memory fabric system, and provides unique addresses for conventional storage outside of the object memory fabric system. The object address space can allow an address to be used once and never garbage collected, which is a major efficiency. The object address space can allow a distinction between allocating address space and allocating storage. In other words, the object address space can be used sparsely, as an effective technique for simplicity, performance, and flexibility.
As further indicated in Table 1, the object memory fabric 700 can directly support per-process virtual address spaces and physical address spaces. In some embodiments, the per-process virtual address spaces and physical address spaces may be compatible with the x86-64 architecture. In certain embodiments, the span of a single virtual address space may be within a single instance of a Linux OS, and may usually be coincident with a single node. The object memory fabric 700 can enable the same virtual address space to span more than a single node. The physical address space may be the actual physical memory addressing (e.g., within an x86-64 node, in some embodiments).
Fig. 8 is a block diagram illustrating an example relationship 800 between the object address space 805, virtual addresses 810, and physical addresses 815, in accordance with certain embodiments of the present disclosure. With respect to the object address space 805, the size of a single object can vary within a range. By way of example, and not limitation, the size of a single object can range from 2 megabytes (2^21) to 16 petabytes (2^64). Other ranges are possible. In some embodiments, within the object memory fabric 700 the object address space 805 may be allocated on an object-granularity basis. In some embodiments, storage may be allocated on a 4k-byte block basis (e.g., blocks 806, 807). Thus, in some embodiments the object address space blocks 806, 807 may correspond to the 4k-byte page size within the x86-64 architecture. When the object address space 805 is created, only the address space and the object metadata may exist. When storage is allocated on a per-block basis, data can be stored in the corresponding block of the object. Block storage can be allocated in a sparse or non-sparse manner, and pre-allocated and/or allocated on demand. For example, in some embodiments, software can use an object as a hash function and allocate physical storage only for the valid hashes.
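The separation of address space from storage described above, an object's full address range existing from creation while physical storage is attached only to 4k-byte blocks that are actually written, can be sketched as follows. The SparseObject class and its method names are illustrative assumptions, not the patent's interfaces; the sketch only models per-block, on-demand allocation.

```python
# Sketch of per-block (4 KiB) storage allocation: address space is created
# up front, but storage is attached only to blocks that are written, so a
# sparse object (e.g. an address-keyed hash) consumes storage only for its
# valid entries.
BLOCK = 4096

class SparseObject:
    def __init__(self, size):
        self.size = size      # full address range, reserved at creation
        self.blocks = {}      # block number -> bytearray, allocated on demand

    def write(self, offset, data):
        assert 0 <= offset and offset + len(data) <= self.size
        blkno, off = divmod(offset, BLOCK)
        blk = self.blocks.setdefault(blkno, bytearray(BLOCK))
        blk[off:off + len(data)] = data   # storage allocated only here

    def allocated_bytes(self):
        return len(self.blocks) * BLOCK

obj = SparseObject(size=2**21)   # 2 MB of address space, zero storage
obj.write(0, b"head")            # allocates one 4 KiB block
obj.write(2**20, b"bucket")      # far-apart write: just one more block
```

Two widely separated writes into a 2 MB object consume only two blocks of storage, which is the efficiency the sparse-allocation discussion above is pointing at.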
Referring to the example of Fig. 8, within a node 820, 825, which in some embodiments can be a conventional server, physical pages corresponding to the physical addresses 815 may be allocated on a dynamic basis corresponding to the virtual addresses 810. Since the object memory fabric 700 actually provides the physical memory within the nodes 820, 825 by way of the object memory fabric DIMMs, when a virtual address segment 811, 812, 813, 814 is allocated, an object of the object address space 805 corresponding to the particular segment 811, 812, 813, 814 can also be created. This enables the same or different virtual addresses 810 on the nodes 820, 825 to address and access the same object. The actual physical addresses 815 at which the blocks/pages within an object reside in a node 820, 825 can vary over time, within or across nodes 820, 825, transparently to the application software.
Certain embodiments of the object memory fabric 700 can provide key advantages: embodiments of the object memory fabric 700 can provide integrated addressing, objects with transparent invariant pointers (no swizzling required), and methods to access a large address space across nodes, with certain embodiments being compatible with x86-64, Linux, and applications. Typically, systems have numerous different addresses (e.g., memory addresses with separate address spaces, sectors, cylinders, physical disks, database systems, file systems, etc.), which requires significant software overhead for converting, buffering, and moving objects and blocks between the different layers of addresses. Using integrated addressing to address objects, and blocks within objects, and using the object namespace, eliminates layers of software by having a single-level addressing that is invariant across all nodes/systems. With a sufficiently large address space, a single address system remains invariant not merely for a particular database application, but for all of these systems working together.
Thus, a node can include a memory module that stores and manages one or more memory objects, where, for each of the one or more memory objects, the physical addresses of memory and storage are managed based at least in part on an object address space that is allocated on a per-object basis with a single-level object addressing scheme. The node can be configured to utilize the object addressing scheme to operatively couple to one or more additional nodes so as to operate as a set of nodes of an object memory fabric, where the set of nodes operates such that all memory objects of the set of nodes are accessible based at least in part on the object addressing scheme, the object addressing scheme defining invariant object addresses for the one or more memory objects, the object addresses being invariant with respect to physical memory storage locations and storage location changes of the one or more memory objects within the memory module, and invariant across all modules interfacing with the object memory fabric. Accordingly, regardless of whether an object is in a single server, the object addresses are invariant within a module and across all modules that interface with the object memory fabric. Even though an object could be stored in any or all of the modules, the object address remains invariant no matter at which physical memory locations the object is currently stored or will be stored. The following provides details of certain embodiments that may provide such advantages through the object address space and object address space pointers.
Certain embodiments of the object memory fabric 700 can support multiple, various pointer formats. Fig. 9 is a block diagram illustrating an example relationship 900 between object sizes 905 and object address space pointers 910, in accordance with certain embodiments of the present disclosure. Table 2 below identifies non-limiting examples of aspects of the object address space pointer 910, in accordance with certain embodiments of the present disclosure. As indicated in Table 2, some example embodiments can support three pointer formats. The object address space format may be an object memory fabric native 128-bit format, and can provide a single pointer with full addressability for any object and any offset within an object. The object memory fabric 700 can support additional formats, for example, two additional 64-bit formats, to enable direct compatibility with x86-64 virtual memory and virtual addresses. Once a relationship between an object memory fabric object and a virtual address segment has been established by the object memory fabric API (which, in some embodiments, can be handled transparently for applications in a Linux libc library), standard x86 Linux programs can directly reference data within an object (an x86 segment) efficiently and transparently, utilizing the x86-64 addressing mechanisms.
Table 3 below identifies non-limiting examples of aspects of the object address space pointers in relation to object sizes, in accordance with certain embodiments of the present disclosure. Embodiments of the object address space can support multiple segment sizes, for example, the six segment sizes from 2^21 to 2^64 illustrated in Table 3 below. The object sizes correspond to the x86-64 virtual memory segment and large page sizes. Objects can start on a modulo-0 object size boundary. The object address space pointer 910 may be broken into ObjStart and ObjOffset fields, whose sizes depend on the object size, as shown in the example below. The ObjStart field corresponds to the object address space start of the object and also corresponds to the ObjectID. The ObjOffset is an unsigned value in the range from zero to (ObjectSize−1), and specifies the offset within the object. The object metadata can specify the object size and the object memory fabric interpretation of the object address space pointer 910. Objects of arbitrary size and sparseness can be specified by merely allocating storage for the blocks of interest within an object.
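The ObjStart/ObjOffset decomposition described above follows directly from the alignment rule: for an object of size 2^k bytes starting on a modulo-size boundary, the low k bits of a pointer are the offset within the object and the remaining high bits identify the object. A minimal sketch, assuming the power-of-two segment sizes of Table 3:

```python
# Split a 128-bit object address space pointer into its ObjStart (the
# ObjectID / object start address) and ObjOffset (offset within the object)
# fields, given the object size 2**k from the object metadata.
def split_pointer(oa_pointer, obj_size_log2):
    mask = (1 << obj_size_log2) - 1
    obj_start = oa_pointer & ~mask   # ObjStart == ObjectID
    obj_offset = oa_pointer & mask   # ObjOffset in [0, ObjectSize - 1]
    return obj_start, obj_offset

# Example: a pointer at offset 0x1234 into the sixth 2**21-byte object.
start, off = split_pointer((5 << 21) | 0x1234, 21)
```

Because the split is a pure bit-mask determined by metadata, the same pointer arithmetic works for every segment size in the table.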
Because of the nature of most applications and the object nature of the object memory fabric 700, most addressing can be relative to an object. In some embodiments, all object memory fabric address pointer formats can be natively stored and loaded by the processor. In some embodiments, Object Relative and Object Virtual Address can work directly with the x86-64 addressing modes. An Object Virtual Address pointer can be, or include, a process virtual address that works within the x86-64 segment and the corresponding object memory fabric object. The object memory fabric object address space can be calculated by using the Object Virtual Address as an object offset. An Object Relative pointer can be, or include, an offset into an x86-64 virtual address segment, so the base-plus-index addressing modes work perfectly. The object memory fabric object address space can be calculated by using the Object Relative pointer as an object offset. Table 3 below identifies non-limiting examples of the details of generating a 128-bit object address space from an Object Virtual Address or Object Relative pointer as a function of object size, in accordance with certain embodiments of the present disclosure.
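The generation just described, forming a full object address space address by using an object-relative offset against the object's start, can be sketched in one line of arithmetic. The function name and argument names below are illustrative assumptions; the only mechanics modeled are "object address = object start + in-object offset", with the offset bounded by the object size from metadata.

```python
# Form a 128-bit object address from an Object Relative pointer (an unsigned
# offset within the x86-64 segment mapped to the object). Object start and
# size come from the object metadata.
def oa_from_relative(obj_start, obj_size, rel_offset):
    assert 0 <= rel_offset < obj_size  # offset must lie within the object
    return obj_start + rel_offset

# Example: offset 0x40 into a 1 GiB (2**30-byte) object starting at 7 << 30.
oa = oa_from_relative(obj_start=7 << 30, obj_size=2**30, rel_offset=0x40)
```

Since the object starts on a modulo-size boundary, this addition never carries into the ObjectID bits, which is why the pointer remains invariant wherever the object physically resides.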
As disclosed herein, certain embodiments may include an object memory fabric distributed object memory and index. With the distributed index, individual nodes can index the local objects and blocks of objects on a per-object basis. Certain embodiments of the object memory fabric distributed object memory and index may be based at least in part on an intersection of the concepts of cellular automata and fat trees. Previous distributed hardware and software systems with real-time dynamic indices have used two approaches: a centralized index or a distributed single-concept index. Embodiments of the object memory fabric can use a new approach that overlays an independent local index function on top of a fat-tree hierarchical network.
Fig. 10 is a block diagram illustrating an example object memory fabric distributed object memory and index structure 1000, in accordance with certain embodiments of the present disclosure. At the leaves of the structure 1000 are the object memories 1035 of any number of processing nodes 1005 and 1010. These object memories 1035 can have object indexes that describe the objects and portions of objects currently stored locally in the object memories 1035. In some embodiments, a number of object memories 1035, which may be DDR4-DIMM interface compatible cards within a single node, are logically connected with an object memory fabric node object index 1040. The object memory fabric node object index 1040 can have an object index that describes the objects and portions of objects currently stored locally and/or currently stored in the object memories 1035. In some embodiments, the object memory fabric node object index 1040 can be instantiated as a PCIe card. In some embodiments, the object memory fabric object memory DDR4-DIMMs and the object memory fabric node object index PCIe card can communicate over PCIe and the memory bus.
In some embodiments, the object memory fabric node object index 1040 works identically to the object index within the object memories 1035, except that the object memory fabric node object index 1040 tracks all objects and portions of objects that are within any of the connected object memories 1035, and maps the objects and portions of objects to the particular object memory 1035. The next level up in the tree is the node object router object index 1020, which can be provided by an object memory fabric router and performs the same object index function for all the object memory fabric node object indexes 1040 to which it is connected. The node object router object index 1020 can have an object index that describes the objects and portions of objects currently stored locally in the lower levels (e.g., 1040, 1035). Thus, according to some embodiments, the router modules can have directory and router functions, while the memory modules can have directory and router functions as well as memory functions to store memory objects. However, other embodiments are possible, and, in alternative embodiments, the router modules may additionally have memory functions to store memory objects.
The pattern illustrated by the structure 1000 can continue to another higher-level inter-node object router object index 1015, which can likewise be provided by an object memory fabric router that performs the same object index function for all the object memory fabric node object indexes to which it is connected, and so on to the root of the tree. Thus, in certain embodiments, each object index and each level can perform the same function independently, but the aggregation of object indexes and levels as a tree network can provide a real-time dynamic distributed index, with great scalability properties, that efficiently tracks and localizes memory objects and blocks. An additional property can be that the combination of the tree, the distributed indexes, and caching enables a significant reduction in bandwidth requirements. This can be illustrated by the hierarchically delineated neighborhoods, each delimited by an object memory fabric router toward the leaves (downward, in this case). As the level of the defined hierarchy increases, so does the aggregate object memory caching capacity. Thus, when an application working set fits within the aggregate capacity at a given level, the bandwidth requirement toward the root at that level can go to zero.
As disclosed herein, each processing node is configured to utilize a set of algorithms to operatively couple to one or more additional processing nodes so as to operate as a set of processing nodes independently of the scale of the set. The set of nodes can operate such that all memory objects of the set are accessible by any node of the set of processing nodes. At a processing node, an object memory module can store and manage memory objects (each of which is instantiated natively therein and managed at a memory layer) and object directories (which index the memory objects and their blocks on a per-object basis). The memory module can process requests, which can be received from an application layer, based at least in part on the one or more object directories. In some cases, the requests can be received from one or more additional processing nodes. In response to a request, a given memory module can process an object identifier corresponding to the request and can determine whether the memory module has the requested object data. If the memory module has the requested object data, the memory module can generate a response to the request based at least in part on the requested object data. If the memory module does not have the requested object data, an object routing module can route the request to another node in the tree. The routing of the request can be based at least in part on the object routing module making a determination, in response to the request, about a location of the object data. If the object routing module identifies the location based at least in part on the directory function of the object routing module, the object routing module can route the request downward to that location (i.e., toward a lower-level node having the requested object data). However, if the object routing module determines that the location is unknown, the object routing module can route the request toward a root node (i.e., to one or more higher-level inter-node object routers) for further determinations at each level, until the requested object is located, accessed, and returned to the original memory module.
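The routing behavior described above can be sketched in software. This is a simplified, hypothetical model; the class and method names (`Node`, `request`, `subtree_has`) are illustrative and not part of the disclosed fabric:

```python
# Simplified model of the hierarchical request routing described above.
# Each node holds a directory of objects stored in its subtree; a request
# is answered locally, routed downward toward a child known to hold the
# data, or escalated toward the root when the location is unknown.
class Node:
    def __init__(self, name, local_objects=None):
        self.name = name
        self.local_objects = dict(local_objects or {})  # object_id -> data
        self.parent = None
        self.children = []

    def add_child(self, child):
        child.parent = self
        self.children.append(child)

    def subtree_has(self, object_id):
        # Aggregated directory: does this node's subtree hold the object?
        if object_id in self.local_objects:
            return True
        return any(c.subtree_has(object_id) for c in self.children)

    def request(self, object_id):
        # 1. Local hit: answer immediately.
        if object_id in self.local_objects:
            return self.local_objects[object_id]
        # 2. Known to be in a child's subtree: route downward.
        for child in self.children:
            if child.subtree_has(object_id):
                return child.request(object_id)
        # 3. Location unknown: escalate toward the root.
        if self.parent is not None:
            return self.parent.request(object_id)
        raise KeyError(object_id)

root = Node("root")
r1, r2 = Node("router1"), Node("router2")
root.add_child(r1); root.add_child(r2)
leaf_a = Node("leafA", {0x10: "data-A"})
leaf_b = Node("leafB", {0x20: "data-B"})
r1.add_child(leaf_a); r2.add_child(leaf_b)

# A miss at leafA escalates to the root, which routes down to leafB.
print(leaf_a.request(0x20))  # -> data-B
```

Each node makes its step of the decision independently, mirroring the absence of centralized coordination in the fabric.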
In addition, as disclosed herein, triggers can be defined in object metadata for objects and/or blocks within objects. Object-based triggers can predict what operations will be needed, and can provide acceleration by performing those operations ahead of time. When a node receives a request that specifies an object (e.g., with a 128-bit object address), the node uses an object directory to determine whether the node has any part of the object. If so, the object directory points to a per-object tree (a separate tree, whose size is based on the size of the object) that can be used to locate the blocks of interest locally. There can be additional trigger metadata that, for a specific block of interest, indicates that the particular address is to be interpreted in a predefined manner as that block is transferred to, or transported through, the memory module. Triggers can specify, on a per-block basis, one or more predefined hardware and/or software actions for one or more data blocks within an object (e.g., fetch a particular address, run a more complex trigger, perform a prefetch, compute these other three blocks and signal software, etc.). Triggers can correspond to a hardware means to dynamically move data and/or perform other actions ahead of when such actions would otherwise be needed, as objects flow through any memory module of the object memory fabric. Accordingly, such actions can be effected when a particular memory object having one or more triggers is located at a respective memory module and is accessed as part of the respective memory module processing one or more other requests.
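A per-block trigger of the kind described above can be sketched as follows. This is a hypothetical software model, with an assumed prefetch action standing in for the predefined hardware/software operations; names such as `ObjectMemory` and `prefetch_next_three` are illustrative:

```python
# Hypothetical sketch of per-block triggers: when a block is accessed,
# any trigger bound to it runs a predefined action ahead of time -- here,
# pre-acquiring the next few blocks into the fast tier.
class ObjectMemory:
    def __init__(self, backing):
        self.backing = backing          # block_index -> data (slow store)
        self.cache = {}                 # block_index -> data (fast store)
        self.triggers = {}              # block_index -> action

    def set_trigger(self, block, action):
        self.triggers[block] = action

    def access(self, block):
        if block in self.triggers:
            self.triggers[block](self)  # fire alongside the access
        self.cache[block] = self.backing[block]
        return self.cache[block]

def prefetch_next_three(start):
    # Action: pre-acquire the three blocks following `start`.
    def action(mem):
        for b in range(start + 1, start + 4):
            if b in mem.backing:
                mem.cache[b] = mem.backing[b]
    return action

mem = ObjectMemory({i: f"block{i}" for i in range(8)})
mem.set_trigger(0, prefetch_next_three(0))
mem.access(0)
print(sorted(mem.cache))  # -> [0, 1, 2, 3]
```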
Figures 11 and 12 are block diagrams illustrating, at a logical level, examples of how the distributed nature of the object index operates and interoperates with the object memory fabric protocol layering, in accordance with certain embodiments of the present disclosure. Some embodiments of the object memory fabric protocol layering may be similar to a conventional layered communication protocol, but with important differences. A communication protocol can be essentially stateless, whereas embodiments of the object memory fabric protocol maintain object state and directly enable distributed and parallel execution without any centralized coordination.
Figure 11 illustrates a complete object memory hit case 1100 that executes entirely within the object memory 1135, in accordance with certain embodiments of the present disclosure. The object memory 1135 can receive a processor request 1105 or a background trigger activity 1106. The object memory 1135 can manage its local DRAM memory as a cache 1130, based on processor physical addresses. The most frequent case may be that the requested physical address is present, in which case it is returned to the processor immediately, as indicated at 1110. The object memory 1135 can use triggers to transparently move data from slower flash memory into the fast DRAM memory, as indicated at 1115.
For the case of a miss 1115 or a background trigger activity 1106, some embodiments may include one or a combination of the following. In some embodiments, an object memory fabric object address can be generated from the physical address, as indicated by block 1140. The object index can generate the location in local flash memory from the object address space, as indicated by block 1145. Object index lookup can be accelerated by two methods: (1) a hardware-based assist for index lookup; and (2) local caching of object index lookup results. Cache coherency of the object memory can be used to determine whether the local state is sufficient for the intended operation, as indicated by block 1150. Based on the index, a lookup can be performed to determine whether the object and/or the block within the object is local, as indicated by block 1155. In the hit case 1160, the data corresponding to the request 1105 or the trigger activity 1106 can be transferred, as indicated at 1165. Also, in some embodiments, when the cache state is sufficient, a decision can be made to cache the block in DRAM.
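The hit path just described can be sketched as a minimal software model. The dictionary-based association table, index, and storage tiers are assumptions for illustration; only the numbered steps (1140, 1145, 1160/1165) come from the text:

```python
# Simplified model of the hit path of Figure 11: a processor physical
# address is mapped to an object memory fabric object address, the object
# index locates the block locally, and the data is returned (optionally
# cached in the fast DRAM tier).
PAGE = 4096

def make_node(assoc, index, flash):
    return {"assoc": assoc,   # physical page -> fabric object address
            "index": index,   # fabric block address -> flash location
            "flash": flash,   # flash location -> data
            "dram": {}}       # block cache in fast memory

def access(node, phys_addr):
    page = phys_addr // PAGE
    ofa = node["assoc"][page]            # step 1140: derive fabric address
    if ofa in node["dram"]:              # fast-path DRAM hit
        return node["dram"][ofa]
    loc = node["index"].get(ofa)         # step 1145: index lookup
    if loc is None:
        return None                      # miss: would be routed up the tree
    data = node["flash"][loc]            # hit case 1160
    node["dram"][ofa] = data             # optionally cache the block
    return data                          # transfer, step 1165

node = make_node(assoc={0: 0xA000}, index={0xA000: 7}, flash={7: "payload"})
print(access(node, 0x0123))  # -> payload  (block is now also in DRAM)
```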
Figure 12 illustrates an object memory miss case 1200, and the distributed nature of the object memory and the object index, in accordance with certain embodiments of the present disclosure. The object memory 1235 can go through the previously described steps, but the routing/decision stage 1250 determines that the object and/or block is not local. As a result, the algorithm can involve the request traversing 1270 up the tree toward the root, until the object/block is found. Any number of levels, and corresponding node elements, can be traversed until the object/block is found. In some embodiments, at each step along the path, the same or similar process steps can be followed to independently determine the next step on the path. No central coordination is required. Additionally, as disclosed herein, object memory fabric APIs and triggers usually execute in the leaves, but can execute in a distributed manner at any index.
As a simplified example, in the present case the request traverses up 1270 from the object memory fabric node object index 1240 corresponding to object memory 1235 to the object router 1220. The object router 1220 and its object router object index can identify the requested object/block as being down the branch toward object memory fabric node object index 1241. Thus, at the index of the object router 1220, the request can then be routed 1275 toward the leaf (or leaves) that can supply the object/block. In the illustrated example, object memory 1236 can supply the object/block. At object memory 1236, the memory access/caching 1241 can be performed (which may include the process steps previously described for the hit case), and the object/block can be returned 1280 to the original requesting leaf 1235 for the final return 1290. Again, in some embodiments, at each step along the path, the same or similar process steps can be followed to independently determine the next step on the path. For example, the original requesting leaf 1235 can perform the process steps 1285 previously described for the hit case, and then return 1290 the requested data.
As disclosed herein, the operation of an individual object memory fabric index structure can be based on several layers of the same tree implementation. Tree structures can serve several purposes within the object memory fabric, as described by Table 4 below. However, various other embodiments are possible.
Figure 13 is a block diagram illustrating an example of a leaf-level object memory fabric structure 1300, in view of the object memory fabric distributed object memory and index structure, in accordance with certain embodiments of the present disclosure. In some embodiments, the leaf-level object memory structure 1300 can include a set of nested B-trees. The root tree can be an object index tree (OIT) 1305, which can index the objects present locally. The index of the object index tree 1305 can be the object memory fabric object address, since objects start at object-size modulo 0. For each object of which at least a single block is locally stored in the object memory, there can be an entry in the object index tree 1305.

The object index tree 1305 can provide one or more pointers (e.g., local pointers) to one or more per object index trees (POITs) 1310. For example, each local object can have a per object index tree 1310. A per object index tree 1310 can index the object metadata and the blocks belonging to the object that are present locally. The leaves of a per object index tree 1310 point to the corresponding metadata and blocks in DRAM 1315 and flash 1320 (e.g., based on offsets within the object). The leaf for a particular block can point to both DRAM 1315 and flash 1320, as in the case of leaf 1325. The organization of object metadata and data is further disclosed herein.
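The nested OIT/POIT relationship above can be sketched with plain dictionaries standing in for the B-trees. This is purely illustrative of the two-level lookup; the field names are assumptions:

```python
# Sketch of the nested index of Figure 13: the OIT maps an object's start
# address to that object's POIT; the POIT maps a block offset to its
# location(s) in DRAM and/or flash.
oit = {
    0x1000: {                      # object at fabric address 0x1000
        "metadata": {"size": 4},
        "poit": {                  # block offset -> storage locations
            0: {"flash": 11},
            1: {"flash": 12, "dram": 3},   # a leaf may point to both tiers
        },
    },
}

def lookup_block(oit, obj_addr, offset):
    obj = oit.get(obj_addr)
    if obj is None:
        return None                 # object not present locally
    return obj["poit"].get(offset)  # block locations, or None if absent

print(lookup_block(oit, 0x1000, 1))  # -> {'flash': 12, 'dram': 3}
```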
The tree structure utilized can be a modified B-tree that is copy-on-write (COW) friendly. COW is an optimization strategy that enables multiple tasks to share information efficiently, without duplicating all of the storage, as long as most of the data is not modified. COW stores modified blocks in new locations, which is well suited to flash memory and caching. In certain embodiments, the tree structure utilized can be similar to that of the open-source Linux file system btrfs, the primary differences being utilization for a single object/memory space, hardware acceleration, and the previously described ability to aggregate independent local indexes. By utilizing multiple layers of B-trees, there can be a higher degree of sharing and less rippling of changes. Applications, such as file systems and database storage managers, can use this efficient underlying mechanism for higher-level operations.
Figure 14 is a block diagram illustrating an example of an object memory fabric router object index structure 1400 in accordance with certain embodiments of the present disclosure. In some embodiments, object memory fabric router object indexes and node object indexes can use nearly identical structures of an object index tree 1405 and, for each object, a per object index tree 1410. The object index tree 1405 can index the objects present locally. Each object described in the object index tree 1405 can have a per object index tree 1410. The per object index trees 1410 can index the blocks and segments present locally.

Object memory fabric router object indexes and node object indexes can index the objects, and the blocks within objects, that are present in the children 1415 within the tree structure 1400, the children being a child router (or child routers) or leaf object memory. An entry within a leaf of a per object index tree 1410 has the ability to represent multiple blocks within the object. Since the blocks of an object may tend to cluster together naturally, and due to background housekeeping, each object tends to be represented more compactly in the object indexes that are closer to the root of the tree. The object index tree 1405 and the per object index trees 1410 can allow deduplication to be achieved at the object and block levels, since multiple leaves can point to the same blocks, as is the case, for example, for leaves 1425 and 1430. The copy-on-write (COW) support of the indexes makes it possible, for example, for an object to update only the blocks that have been modified.
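The copy-on-write update of a shared block can be sketched as follows. This is a minimal model, assuming dictionary-based POIT leaves; it shows why leaves of other objects that reference the shared block are unaffected:

```python
# Sketch of the copy-on-write update described above: a modified block is
# written to a new location and only the owning object's index entry is
# redirected; other leaves sharing the old block keep seeing it unchanged.
store = {100: "shared-v1"}   # storage location -> block contents
next_loc = [101]             # next free storage location

poit_a = {0: 100}   # object A, block 0 -> location 100
poit_b = {0: 100}   # object B's leaf points at the same block (dedup)

def cow_write(poit, offset, data):
    loc = next_loc[0]
    next_loc[0] += 1
    store[loc] = data     # write the modified block to a new location
    poit[offset] = loc    # redirect only this object's leaf

cow_write(poit_a, 0, "A-v2")
print(store[poit_a[0]], store[poit_b[0]])  # -> A-v2 shared-v1
```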
Figures 15A and 15B are block diagrams of non-limiting examples of index tree structures, including a node index tree structure 1500 and a leaf index tree 1550, in accordance with certain embodiments of the present disclosure. Further non-limiting examples of aspects of the index tree fields are identified in Table 5 below. Other embodiments are possible. Each index tree can consist of node blocks and leaf blocks. Each node block or leaf block can consist of a variable number of entries, based on the type and size. The type specifies the type of node, node block, leaf, and/or leaf block.
The Size field independently specifies the sizes of the LPointer and IndexVal (or object offset). Within a balanced tree, a single block can point to all node blocks or all leaf blocks. In order to deliver the highest performance, the tree may become unbalanced, for example such that the number of levels through all paths of the tree is equivalent. Node blocks and leaf blocks can provide fields to support unbalanced trees. Background activity can rebalance the trees as part of other background operations. For example, an interior node (non-leaf) within the OIT can include LPointer and NValue fields. The NValue can consist of the object size and the object ID. The object ID requires 107 bits (128 - 21) to specify the smallest possible object. Each LPointer can point to the next level of interior nodes or leaf nodes. An LPointer may require enough bits to represent all of the blocks within its local storage (approximately 32 bits to represent 16 TB). For a node within the POIT, the NValue can consist of the object offset based on the object size. The object size can be encoded within the NSize field. The Size field can enable a node to hold the maximum number of LPointer and NValue fields based on usage. The index tree root node can be stored at multiple locations on multiple flash devices, to achieve a reliable cold start of the OIT. Updates of the tree root block can be alternated between the mirrors, to provide wear leveling.

By default, each POIT leaf entry can point to the location of a single block (e.g., 4k bytes). POIT Leaf OM entries and POIT Leaf Router entries can include a field that allows support of more than a single block, thereby enabling more compressed index trees by matching the page size of the persistent storage, leading to higher index tree performance and higher persistent storage performance.
Nodes and leaves can be differentiated by the Type field at the start of each 4k block. The NSize field can encode the size of the NValue field within a node, and the LSize field can encode the size of the LValue field within a leaf. The size of the LPointer field can be determined by the physical addressing of the local storage, which is fixed for a single device (e.g., RDIMM, node router, or inter-node router). The LPointer may be valid only within a single device, not across devices. The LPointer can specify whether the corresponding block is stored in persistent memory (e.g., flash) or in faster memory (e.g., DRAM). A block that is stored in DRAM can also have storage allocated within persistent memory, so that there are two entries indicating the two storage locations of the block, node, or leaf. Within a single block type, all NValue and/or LValue fields can be a single size.
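The NValue encoding discussed above can be illustrated with a worked bit-packing sketch. The 107-bit ObjectID width (128 - 21) follows from the text; the 8-bit width assumed for the encoded object size and the exact layout are illustrative assumptions:

```python
# Sketch of packing an OIT NValue from an encoded object size and an
# ObjectID. With a minimum object size of 2**21 bytes, the ObjectID needs
# at most 107 bits (128 - 21). The size-field width and layout below are
# assumptions for illustration, not the disclosed format.
OBJ_ID_BITS = 107   # 128 - 21, per the text
SIZE_BITS = 8       # assumed width for the encoded object size

def pack_nvalue(obj_size_code, object_id):
    assert object_id < (1 << OBJ_ID_BITS)
    assert obj_size_code < (1 << SIZE_BITS)
    return (obj_size_code << OBJ_ID_BITS) | object_id

def unpack_nvalue(nvalue):
    return nvalue >> OBJ_ID_BITS, nvalue & ((1 << OBJ_ID_BITS) - 1)

nv = pack_nvalue(0x15, 0xABCDEF)
print(unpack_nvalue(nv))  # -> (21, 11259375)
```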
An OIT node can include several node-level fields (Type, NSize, and LParent) and entries that include either OIT Node entries or OIT Leaf entries. Since an index tree can sometimes be unbalanced, a node can include both node entries and leaf entries. A POIT node can include one or more node-level fields (e.g., Type, NSize, and/or LParent) and entries including OIT Leaf entries. OIT leaf types can be differentiated by the otype field. An OIT Leaf (object index table leaf) can point to the head of a POIT (per object index table) that specifies the object blocks and the object metadata. An OIT Leaf R can point to a remote head of a POIT. This can be utilized to reference an object that resides on a remote device across a network, and this leaf can enable the remote device to manage the object.
POIT leaf types can be differentiated by the ptype field. A POIT Leaf OM can point to a block of object memory or metadata. The object offset field can be one bit greater than the number of bits needed to specify an offset for a particular object size, in order to specify metadata. For example, for an object size of 2^21, 10 bits may be required (9 plus 1). Implementations can choose to represent the offset in two's complement form (sign format, with the first block of metadata being -1), or in one's complement form with an additional bit indicating metadata (the first block of metadata is represented by 1, with the metadata bit set).
A POIT Leaf Remote can point to a block of object memory or metadata that is remote from the local DIMM. This can be used, through the streaming packet interface, to reference a block that resides on a remote device across a network. For example, the device might be a mobile device. This leaf can enable the object memory fabric hardware to manage coherency on a block basis for the remote device.
A POIT Leaf Router can be used within node object routers and inter-node object routers to specify, for the corresponding object memory fabric object block address, the state of that block for each of up to 16 downstream nodes. Within a node object router, up to 16 DIMMs can be specified in some embodiments (or more in other embodiments). Within an inter-node object router, up to 16 (or more, in other embodiments) downstream routers or node object routers (e.g., server nodes) can be specified. The block object address can be present in one or more of the downstream devices, based on valid state combinations.
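A POIT Leaf Router entry of this kind can be sketched as a block object address paired with a small per-downstream state vector. The encoding and the state values used here are assumptions for illustration:

```python
# Sketch of a POIT Leaf Router entry: for one block object address, a
# state vector records the block's state in each of up to 16 downstream
# devices (DIMMs or downstream routers). State encodings are illustrative.
NUM_DOWNSTREAM = 16

entry = {"boa": 0x200, "states": [0] * NUM_DOWNSTREAM}  # 0 = not present

def set_state(entry, downstream, state):
    assert 0 <= downstream < NUM_DOWNSTREAM
    entry["states"][downstream] = state

set_state(entry, 3, 2)   # e.g., 2 = "copy present" (assumed encoding)
set_state(entry, 7, 1)   # e.g., 1 = "snapshot"     (assumed encoding)
print([i for i, s in enumerate(entry["states"]) if s])  # -> [3, 7]
```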
Index lookup, index COW update, and index caching can be directly supported in object memory fabric hardware within the object memories, node object indexes, and object memory fabric routers. In addition to the object memory fabric index node format, application-defined indexes can also be supported. These can be initialized through the object memory fabric API. An advantage of application-defined indexes can be that object memory fabric hardware-based index lookup, COW update, index caching, and parallelism can be supported.
Various embodiments can provide for background operations and garbage collection. Since each DIMM and router within the object memory fabric locally maintains its own directory and storage, background operations and garbage collection can be accomplished locally and independently. Each DIMM or router can have a memory hierarchy for storing index trees and data blocks, which may include on-chip cache, fast memory (e.g., DDR4 or HMC DRAM), and slower nonvolatile memory (e.g., flash) that it can manage, as well as the index trees.
Each level within the hierarchy can perform the following operations: (1) tree balancing to optimize lookup time; (2) reference counting and aging to determine when blocks are moved between the different storage tiers; (3) free-list updates for each local level within the hierarchy, as well as keeping fill-level parameters for the major local levels of the hierarchy; (4) periodically delivering fill levels to the next level of the hierarchy, enabling load balancing of storage between the DIMMs on a local server and between levels of the object memory fabric hierarchy; and (5) if a router, load balancing between its child nodes.
The block reference count can be used within the object memory fabric to indicate the relative frequency of access. A higher value can indicate more frequent use over time, and a lower value indicates less frequent use. When the block reference count is associated with blocks in persistent memory, the blocks with the lowest values can be candidates to be moved to another DIMM or node that has more available space. Each time a block is accelerated into volatile memory, the reference count can be incremented. A low-frequency background scan can decrement the value if the object is not in volatile memory, and can increment the value if the object is in volatile memory. It is contemplated that the scan algorithm may evolve to increment or decrement over time based on the reference value, in order to provide appropriate hysteresis. Blocks that are frequently accelerated into, or are present in, volatile memory can have higher reference count values.
When the block reference count is associated with blocks in volatile memory, the blocks with the lowest values can be candidates for moving back to another DIMM, to persistent memory, or to memory in another node. When a block is moved into volatile memory, the reference count can be initialized based on the instruction or use case that initiated the movement. For example, a demand miss can set the value to the midpoint, and a speculative fetch can set it to the quarter point. A single use might be set lower than the quarter point. A moderate-frequency background scan can decrement the reference values. Thus, demand fetches can initially be weighted higher than speculative fetches. If a speculative fetch is not utilized, its value can quickly fall to the lower reference values that are replaced first. A single use can be weighted low, so as to be a replacement candidate sooner than other blocks. Thus, single-use and speculative blocks may not displace other frequently accessed blocks.
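The reference-count policy above can be sketched directly. The count range is an assumption; the midpoint/quarter-point initialization, increment on use, background decrement, and lowest-count eviction come from the text:

```python
# Sketch of the block reference-count policy described above: counts are
# initialized by how the block arrived (demand miss at the midpoint,
# speculative fetch at the quarter point), incremented on use, decayed by
# a background scan, and the lowest-count block is the eviction candidate.
MAX_COUNT = 16                      # assumed count range
DEMAND_INIT = MAX_COUNT // 2        # midpoint
SPECULATIVE_INIT = MAX_COUNT // 4   # quarter point

counts = {}

def admit(block, speculative=False):
    counts[block] = SPECULATIVE_INIT if speculative else DEMAND_INIT

def touch(block):
    counts[block] = min(MAX_COUNT, counts[block] + 1)

def background_scan():
    for b in counts:
        counts[b] = max(0, counts[b] - 1)

def eviction_candidate():
    return min(counts, key=counts.get)

admit("demand-block")
admit("spec-block", speculative=True)
touch("demand-block")
background_scan()
print(eviction_candidate())  # -> spec-block
```

An unused speculative block decays toward zero and is replaced first, while the demand block retains a higher weight.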
Figure 16 is a block diagram illustrating aspects of an example physical memory organization 1600 in accordance with certain embodiments of the present disclosure. The object memory fabric can provide multiple methods of accessing objects and blocks. For example, a direct method can be based on execution units within the object memory fabric, or devices that can directly generate full 128-bit memory fabric addresses and can have complete direct access.
An associated method can take into consideration conventional servers with limited virtual address and physical address spaces. The object memory fabric can provide APIs to dynamically associate objects (e.g., segments) and blocks (e.g., pages) with the larger 128-bit object memory fabric addresses. The associations provided by the AssocObj and AssocBlk operations can be utilized by an object memory fabric driver (e.g., a Linux driver) and an object memory fabric system library (Syslib) interfacing with the standard processor memory management, so that the object memory fabric can appear transparent to the operating system and applications. The object memory fabric can provide: (a) an API to associate a processor segment and its range of virtual addresses with an object memory fabric object, thereby ensuring seamless pointer and virtual addressing compatibility; (b) an API to associate a page of virtual address space, and the corresponding object memory fabric block, with a page/block of local physical memory within an object memory fabric DIMM (this can ensure processor memory management and physical addressing compatibility); and/or (c) local physical memory divided into standard conventional server DIMM slots, each slot having 512 GB (2^39 bytes). On a per-slot basis, the object memory fabric can retain an additional directory, indexed by physical address, of the object memory fabric object address associated with each block, as illustrated below.
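The AssocObj and AssocBlk associations can be sketched as follows. The operation names come from the text; the parameter names and dictionary-based tables are illustrative stand-ins for the driver/Syslib state:

```python
# Sketch of the association API described above: AssocObj ties a processor
# segment (and its virtual address range) to a fabric object; AssocBlk
# ties a virtual page to the local physical page backing a fabric block.
segment_map = {}   # segment id -> fabric object address
page_map = {}      # virtual page -> (physical page, fabric block address)

def AssocObj(segment_id, fabric_object_addr):
    segment_map[segment_id] = fabric_object_addr

def AssocBlk(virtual_page, physical_page, fabric_block_addr):
    page_map[virtual_page] = (physical_page, fabric_block_addr)

AssocObj(segment_id=3, fabric_object_addr=0x1000)
AssocBlk(virtual_page=0x7F00, physical_page=0x42, fabric_block_addr=0x1002)
print(page_map[0x7F00])  # -> (66, 4098)
```

The per-slot physical directory described next is, in effect, the inverse of `page_map`: indexed by physical address, yielding the fabric block address.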
For the physical memory 1630, the physical memory directory 1605 can include: the object memory fabric object block address 1610; the object size 1615; a reference count 1620; a modified field 1625, which can indicate whether the block has been modified relative to persistent memory; and/or a write enable 1630, which can indicate whether the local cache state is sufficient for writing. For example, if the cache state is copy, writes may be blocked, and the object memory fabric would likely have to gain sufficient state for the write. Based on a boot-time object memory fabric DIMM SPD (Serial Presence Detect) configuration, physical address ranges can be allocated to each DIMM by the system BIOS.
Figure 17A is a block diagram of example object addressing 1700 in accordance with certain embodiments of the present disclosure. Figure 17B is a block diagram illustrating example aspects of object memory fabric pointer and block addressing 1750 in accordance with certain embodiments of the present disclosure. An object memory fabric object 1705 can include object data 1710 and metadata 1715, both of which in some embodiments are divided into 4k blocks as the unit of storage allocation, referenced through the object memory fabric address space 1720. The object starting address can be the ObjectID 1755. Data 1710 can be accessed as a positive offset from the ObjectID 1755. The maximum offset can be based on the ObjectSize 1760.

Object metadata 1715 can be accessed as a negative offset from ObjectStart 1725 (the ObjectID). Metadata 1715 can also be referenced by an object memory fabric address within the top 1/16th of the object address space 1720. The start of a particular object's metadata can be 2^128 - 2^124 + ObjStart/16. This arrangement can enable the POIT to represent the metadata 1715 compactly, and gives the metadata 1715 object addresses, so that it can be managed coherently just like data. Although the full object address space can be allocated for object data 1710 and metadata 1715, storage can be allocated sparsely on a block basis. At a minimum, in some embodiments, an object 1705 has a single block of storage allocated for the first block of metadata 1715. Object access permissions can be determined through the object memory fabric file system ACLs and the like. Since the object memory fabric manages objects in units of 4k blocks, the addressing within the object memory fabric object memory is in terms of block addresses, referred to as block object addresses 1765 (BOA), corresponding to object address space [127:12]. BOA[11:0] can be utilized by the object memory for the ObjectSize (BOA[7:0]) and the object metadata indication (BOA[2:0]).
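The addressing arithmetic above can be checked with a short worked sketch. The formulas follow the text directly; the example object start address is an arbitrary illustration:

```python
# Worked sketch of the addressing above: data is addressed at a positive
# offset from ObjectStart, metadata lives in the top 1/16th of the address
# space starting at 2**128 - 2**124 + ObjectStart // 16, and the block
# object address (BOA) is the object address with the low 12 bits dropped
# (4k blocks).
ADDR_BITS = 128

def metadata_base(object_start):
    return (1 << ADDR_BITS) - (1 << (ADDR_BITS - 4)) + object_start // 16

def block_object_address(object_addr):
    return object_addr >> 12          # object address space [127:12]

obj_start = 0x200000                  # an example 2**21-byte object
# 2**128 - 2**124 is the base of the top 1/16th of the address space:
assert metadata_base(obj_start) == (15 << 124) + obj_start // 16
print(hex(block_object_address(obj_start)))  # -> 0x200
```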
Figure 18 is a block diagram illustrating example aspects 1800 of object metadata 1805 in accordance with certain embodiments of the present disclosure. Table 6 below describes the metadata of the first block 1810 of the metadata 1805 for some embodiments. In some embodiments, the first block 1810 of the metadata 1805 can hold the metadata for the object, as shown.

System-defined metadata can include any data related to Linux, so that the use of certain objects can be coordinated seamlessly across servers. Application-defined metadata can include application-related data from a file system or database storage manager, to enable searches and/or relationships between the objects that are managed by the application.
For an object with a small number of triggers, the triggers can be stored within the first block; otherwise, a trigger B-tree root can reference a metadata expansion area of the corresponding object. Trigger B-tree leaves can specify triggers. A trigger can be a single trigger action. When more than a single action is required, trigger programs can be invoked. When trigger programs are invoked, they can reside in the expansion area. The remote object table can specify the objects that are accessible from this object by the extended instruction set.
Some embodiments can provide an extended instruction execution model. A goal of the extended execution model can be to provide a lightweight dynamic mechanism that delivers both memory and execution parallelism. The dynamic mechanism enables a dataflow method of execution, making it possible to combine a high degree of parallelism with tolerance of variations in the access latency of portions of objects. Work can be accomplished based on actual dependencies, rather than a single access latency stalling the computation.
Various embodiments may include one or a combination of the following. Load and memory references can be split transactions, with separate requests and responses, so that the thread and the memory path are not utilized during the entire transaction. Each thread and execution unit can issue multiple loads into the (local and remote) object memory fabric before a response is received. The object memory fabric can be a pipeline that handles multiple requests and responses from multiple sources, so that memory resources can be fully utilized. Execution units can receive responses in a different order from that in which the requests were issued. Execution units can switch to different threads so as to be fully utilized. The object memory fabric can implement policies to dynamically determine when to move objects, or portions of objects, rather than moving threads or creating threads.
Figure 19 is a block diagram illustrating aspects of an example micro-thread model 1900 in accordance with certain embodiments of the present disclosure. A thread can be the basic unit of execution. A thread can be defined at least in part by an instruction pointer (IP) and a frame pointer (FP). The instruction pointer can specify the current instruction being executed. The frame pointer can specify the location of the thread's current execution state.
A thread can include multiple micro-threads. In the example shown, thread 1905 includes micro-threads 1906 and 1907. However, a thread can include a greater number of micro-threads. The micro-threads of a particular thread can share the same frame pointer but have different instruction pointers. In the example shown, frame pointers 1905-1 and 1905-2 specify the same location, but instruction pointers 1910 and 1911 specify different instructions.
One purpose of micro-threads can be to enable data-flow-like operation within a thread by allowing multiple memory operations to be pending asynchronously. Micro-threads can be created by a version of the fork instruction, and can be rejoined by the join instruction. The extended instruction set can treat the frame pointer as the top of a stack or as a register set by performing operations at offsets from the frame pointer. Load and store instructions can move data between the frame and objects.
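The relationship just described — micro-threads of one thread sharing a frame while keeping private instruction pointers — can be sketched as follows. This is an illustrative model only; the class and method names are assumptions, not taken from the specification.

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    """Shared execution state; slots are addressed by offset from the FP."""
    slots: dict = field(default_factory=dict)

    def store(self, offset, value):
        self.slots[offset] = value

    def load(self, offset):
        return self.slots[offset]

@dataclass
class MicroThread:
    ip: int          # private instruction pointer
    frame: Frame     # shared frame (same frame pointer)

def fork(parent: MicroThread, new_ip: int) -> MicroThread:
    """Create a micro-thread: same frame, new instruction pointer."""
    return MicroThread(ip=new_ip, frame=parent.frame)

main = MicroThread(ip=0, frame=Frame())
worker = fork(main, new_ip=100)

# Both micro-threads see the same frame slots, as with frame pointers
# 1905-1/1905-2 specifying one location while IPs 1910/1911 differ.
main.frame.store(8, "value")
```

A store through one micro-thread's frame pointer is immediately visible through the other's, which is what makes the shared frame a lightweight communication channel between micro-threads.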
Figure 20 is a block diagram illustrating aspects of an example relationship 2000 between code, frames, and objects according to some embodiments of the disclosure. Specifically, Figure 20 illustrates how object data 2005 is referenced through frame 2010. The default condition can be that load and store instructions reference objects 2005 within the local scope. Through access control and security policies, access to objects 2005 beyond the local scope can be provided in a secure manner. Once such access is given, objects 2005 in both local and non-local scope can be accessed with the same efficiency. The object memory fabric thus encourages strong security by encouraging efficient object encapsulation. By sharing frames, micro-threads provide a very lightweight mechanism to achieve dynamic and data-flow memory and execution parallelism, for example on the order of about 10-20 micro-threads or more. Multi-threading enables virtually unlimited memory-based parallelism.
Figure 21 is a block diagram illustrating example aspects of micro-thread concurrency 2100 according to some embodiments of the disclosure. Specifically, Figure 21 shows the parallel data-flow concurrency of a simple example that sums the values at several random locations. According to some embodiments of the disclosure, a serial version 2105 and a parallel version 2110 are shown side by side. The parallel version 2110 can be almost n times faster, because the loads are overlapped in parallel.
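The speedup claimed for the parallel version can be illustrated with a small sketch: each load carries a fixed latency, and the data-flow form lets those latencies overlap instead of accumulating. The latency constant and thread-pool mechanism below are illustrative stand-ins, not part of the specification.

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor

LATENCY = 0.01  # stand-in for per-load object-memory access latency, seconds

def load(memory, addr):
    time.sleep(LATENCY)        # simulated access latency
    return memory[addr]

def serial_sum(memory, addrs):
    # Latencies add up: total time is roughly len(addrs) * LATENCY.
    return sum(load(memory, a) for a in addrs)

def parallel_sum(memory, addrs):
    # Loads issued concurrently; total time is roughly one LATENCY.
    with ThreadPoolExecutor(max_workers=len(addrs)) as pool:
        return sum(pool.map(lambda a: load(memory, a), addrs))

memory = list(range(1000))
addrs = [random.randrange(1000) for _ in range(16)]
```

Both versions compute the same sum; only the degree to which the load latencies overlap differs, which is the point Figure 21 makes.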
Referring again to Figure 20, this approach can be extended in a dynamic fashion to iterative and recursive methods. The advantages of prefetching ahead of time can now be realized without the use of prefetching and with minimal locality. When an object is created, a single default thread 2015 (with a single micro-thread 2020 created) can wait for a start message to be sent to the default thread 2015 as its beginning. The default thread 2015 can then create micro-threads using thread creation, or create new threads using a version of fork.
In some implementation examples, both the instruction pointer and the frame pointer can be confined to the extended metadata region 1815, which starts at block 2 and extends to segment size (SegSize)/16. As the number of objects, object size, and object capacity increase, thread and micro-thread parallelism can increase. Since threads and micro-threads can be bound to objects, threads and micro-threads can also move and be distributed as objects move and are distributed. Embodiments of the object memory fabric can make a dynamic choice between moving an object or a portion of an object to a thread, and assigning a thread to an object or a portion of an object (or objects). This can be facilitated by the encapsulated object methods implemented by the extended execution model.
As further noted above, embodiments of the present invention can also include an object memory fabric instruction set, which can provide a unique model, based on triggers, that supports core operations and optimizations and allows the memory-intensive portions of applications to be executed more efficiently and in a highly parallel manner within the object memory fabric.
The object memory fabric instruction set can be data-enabling due to several characteristics. First, instruction sequences can be flexibly triggered by data access, whether the access is performed by a conventional processor, by object memory fabric activity, by another sequence, or by an explicit object memory fabric API call. Second, sequences can be of arbitrary length, but short sequences can be more efficient. Third, the object memory fabric instruction set can support a highly multi-threaded memory scale. Fourth, the object memory fabric instruction set can provide efficient co-threading with conventional processors.
Embodiments of the present invention include two classes of instructions. The first class is trigger instructions. A trigger instruction includes a single instruction and action based on a reference to a specific object address (OA). A trigger instruction can invoke extended instructions. The second class is extended instructions. Extended instructions define arbitrary parallel functionality, ranging from API calls to complete high-level software functions. After the instruction set model has been discussed, these two classes of instructions are discussed in turn. As previously mentioned, trigger instructions implement efficient, single-purpose memory-related functions in scenarios where no full trigger program is needed.
Using the metadata and triggers defined above, an execution model based on memory data flow can be implemented. This model can represent a dynamic data-flow method of execution, in which procedures are executed based on the actual dependencies of memory objects. This provides a high degree of memory and execution parallelism, which in turn provides tolerance for variation in access latency between memory objects. In this model, instruction sequences are executed and managed based on data access. These sequences can be of arbitrary length, but short sequences are more efficient and provide greater parallelism.
The extended instruction set enables efficient, highly threaded execution in memory. The instruction set gains efficiency in several ways. First, the instruction set can include direct object address processing and generation, without the overhead of complex address translation or software layers to manage different address spaces. Second, the instruction set can include direct object authentication, without the run-time overhead of security arrangements that would otherwise be set up by third-party authentication software. Third, the instruction set can include object-related memory computing. For example, as an object moves, the computation can move with it. Fourth, the instruction set can include parallelism that is dynamic and transparent, based on scale and activity. Fifth, the instruction set can be implemented with an integrated memory instruction set for object memory fabric operation, so that memory behavior can be customized to application requirements. Sixth, the instruction set can handle memory-intensive compute functions directly in memory. This includes operations carried out as memory is touched. Possible operations can include, but are not limited to, searching, image/signal processing, encryption, and compression. Inefficient interactions with conventional processors are significantly reduced.
Extended instruction functions can be used for memory-intensive computing, consisting primarily of memory references and simple operations based on those references, where the memory references cover a problem size of interest larger than a cache or main memory. Some examples can include, but are not limited to:
Macros that define an API from a conventional processor.

Defining the interaction streams between the hierarchical components of the object memory fabric. Each component can use a core instruction sequence set to implement object memory fabric functions.

Macros that accelerate short sequences of key application kernels, such as BFS (breadth-first search). BFS is a core strategy for searching a graph, and is heavily used by graph databases and graph applications. For example, BFS is used across a variety of problem spaces to find a shortest or optimal path. It is a representative algorithm for illustrating the challenge of analyzing large-scale graphs (namely, the absence of locality), because graphs are larger than caches and main memory, and almost all the work consists of memory references. In the case of BFS, the extended instruction capability described herein, coupled with threads, handles almost the entire BFS through recursive instances of threads that search adjacency lists, with the number of enabled nodes based on graph size. Highly parallel direct in-memory processing and hierarchical memory operations can reduce software path length. When combined with the object memory fabric capability described above to keep all data in memory and localize it before use, the performance and efficiency of each node are significantly increased.
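For reference, the access pattern the BFS discussion above calls out — a frontier expansion that is almost entirely memory references into neighbor lists — can be written as a minimal sketch. The graph representation and function names here are illustrative, not taken from the specification.

```python
from collections import deque

def bfs_distances(adj, source):
    """Return hop distance from source to every reachable vertex."""
    dist = {source: 0}
    frontier = deque([source])
    while frontier:
        v = frontier.popleft()
        for w in adj[v]:                 # the memory-bound neighbor scan
            if w not in dist:            # first visit = shortest path in hops
                dist[w] = dist[v] + 1
                frontier.append(w)
    return dist

# Tiny example graph: edges 0-1, 0-2, 1-3, 2-3, 3-4
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
```

On a large graph, nearly every iteration of the inner loop is a reference to a distinct adjacency list with little reuse, which is why BFS benefits from in-memory processing rather than cache-based execution.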
Complete layer functions, such as:

○ A storage engine for building a hierarchical file system on top of a flat object memory. A storage engine is, for example, the engine that stores, handles, and retrieves the appropriate object(s) and information from within objects. For MySQL, the object may be a table. For a file system, the object may be a file or a directory. For a graph database, the object may be a graph, and the information may consist of vertices and edges. The supported operators can be based on, for example, the object type (file, graph, SQL).

○ A storage engine for a structured database, such as MySQL

○ A storage engine for unstructured data, such as a graph database

○ A storage engine for a NoSQL key-value store

Complete applications: a file system, a structured database such as MySQL, unstructured data such as a graph database, or a NoSQL key-value store

User-programmable functions.
According to one embodiment, a reference trigger can invoke a single trigger action based on a reference to a specific OA. Each OA can have a single reference trigger. When more than one action is needed, a TrigFunction (trigger function) reference trigger can be used to invoke a trigger program. Reference triggers can be composed of the instructions included in Table 7 below.
As described above, the trigger instruction set can include fetching the block specified in the specified pointer at the specified object offset, based on a specified trigger condition and action. The trigger instruction binary format can be expressed as:

Trigger PtrType TrigType TrigAction RefPolicy ObjOffset

An example set of operands of the trigger instruction set is included in Tables 8-12 below.
As described above, the TrigFunction (or TriggerFunct) instruction set can include executing a trigger program, starting at the specified metadata offset, upon the specified trigger condition at the specified data object offset. TriggerFunct allows more complex sequences than a single pending trigger instruction to be implemented. The TrigFunct instruction binary format can be expressed as:

TrigFunct PtrType TrigType MetaDataOffset ObjOffset

An example set of operands of the TrigFunction instruction set is included in Tables 13-16 below.
According to one embodiment, extended instructions can be interpreted in 64-bit words in three formats: short (two instructions per word), long (a single instruction per word), and reserved.
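The three word formats just described can be illustrated with a simple packing sketch for the short format, where one 64-bit word carries two 32-bit instructions. The field widths and helper names are assumptions chosen only for illustration; the specification does not define this exact layout.

```python
MASK32 = (1 << 32) - 1

def pack_short_pair(insn_a: int, insn_b: int) -> int:
    """Pack two 32-bit short instructions into one 64-bit word."""
    assert insn_a <= MASK32 and insn_b <= MASK32
    return (insn_a << 32) | insn_b

def unpack_short_pair(word: int):
    """Recover the two short instructions from a 64-bit word."""
    return (word >> 32) & MASK32, word & MASK32

word = pack_short_pair(0xDEADBEEF, 0x12345678)
```

The long format would instead occupy the whole 64-bit word with a single instruction, and the reserved format leaves room for future encodings.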
In general, the combination of triggers and the extended instruction set can be used to define arbitrary, parallel functions, such as: direct object address processing and generation, without the overhead of complex address translation or software layers to manage different address spaces; direct object authentication, without the run-time overhead of security arrangements that would otherwise be set up by third-party authentication software; object-related memory computing, in which, as objects move between nodes, the computation can move with them; and parallelism that is dynamic and transparent with respect to scale and activity. These instructions fall into three conceptual classes: memory references, including load, store, and special memory fabric instructions; control flow, including fork, join, and branch; and execute, including arithmetic and compare instructions.
The different types of memory reference instructions are listed in Table 18 below.
The object memory fabric can use the Pull instruction as a request to copy or move the specified block to a (for example, local) memory bank. The 4k-byte block operand in the object specified by src_oid, at the object offset specified by src_offset, can be requested according to the priority specified by priority and the state specified by pull_state. The data can then be moved by a Push instruction. The Pull instruction binary format can be expressed as:

An example operand set of the Pull instruction set is included in Tables 19-23 below.
The Push instruction can be used to copy or move the specified block from local storage to a remote location. The 4k-byte block operand in the object specified by src_oid, at the object offset specified by src_offset, can be requested according to the priority specified by priority and the state specified by pull_state. The data may have been previously requested by a Pull instruction. The Push instruction binary format can be expressed as:

An example operand set of the Push instruction set is included in Tables 24-28 below.
The PushAck (push acknowledge) or Ack (acknowledge) instruction can be used to acknowledge that the block associated with a Push has been received at one or more locations. The 4k-byte block operand in the object specified by src_oid, at the object offset specified by src_offset, can be acknowledged. The Ack instruction binary format can be expressed as follows:

An example operand set of the Ack instruction set is included in Tables 29-31 below.
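The Pull / Push / PushAck exchange described above can be sketched as a split transaction against a toy in-memory "remote" bank. All class and field names below are illustrative; the real instructions also carry priorities and coherency state that this sketch omits.

```python
BLOCK = 4096  # 4k-byte block operand

class RemoteBank:
    def __init__(self):
        self.blocks = {}          # (oid, offset) -> bytes

    def handle_pull(self, oid, offset):
        """A Pull is only a request; the data comes back later as a Push."""
        return ("push", oid, offset, self.blocks[(oid, offset)])

class LocalBank:
    def __init__(self):
        self.blocks = {}
        self.acks = []

    def handle_push(self, oid, offset, data):
        """Accept the pushed block and emit the acknowledgement."""
        self.blocks[(oid, offset)] = data
        return ("push_ack", oid, offset)

remote, local = RemoteBank(), LocalBank()
remote.blocks[(7, 0)] = b"x" * BLOCK

# Split transaction: the request and response are separate messages, so the
# requester is free to issue other work between them.
msg = remote.handle_pull(7, 0)
ack = local.handle_push(*msg[1:])
local.acks.append(ack)
```

The point of the split is visible in the structure: nothing blocks between `handle_pull` and `handle_push`, which is where a real implementation would interleave other requests and responses.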
The Load instruction set includes the operand of size osize in the object specified by src_oid, at the object offset specified by src_offset. The operand can be written to the word offset from the frame pointer specified by dst_fp. The Load instruction ignores the empty state.

An example operand set of the Load instruction set is included in Tables 32-36 below.
The Store instruction set includes the word specified by src_fp, which can be truncated to the size specified by osize and stored in the object specified by dst_oid at the offset dst_offst. For example, only ssize bytes are stored. The Store instruction ignores the empty state. The Store instruction binary format can be expressed as:

An example operand set of the Store instruction set is included in Tables 37-41 below.
The ReadPA (read physical address) instruction reads 64 bytes by physical address of the local memory module. The operand specified by src_pa can be written to the word offset from the frame pointer specified by dst_fp. The ReadPA instruction binary format can be expressed as:

An example operand set of the ReadPA instruction set is included in Tables 42-44 below.
The WritePA (write physical address) instruction writes 64 bytes by physical address of the local memory module. The 64 bytes specified by src_fp are stored at the physical address specified by dst_pa. The WritePA instruction binary format can be expressed as:

An example operand set of the WritePA instruction set is included in Tables 45-47 below.
Each word in an object memory fabric object can include a state indicating an empty or full state. The empty state conceptually means that the value of the corresponding word has been emptied. The full state conceptually means that the value of the corresponding word has been filled. This state can be used by certain instructions to ensure that only a single thread can read or write the word atomically. The Empty instruction can operate similarly to a load instruction, as shown in Table 48 below.

The operand of size osize in the object specified by src_oid, at the object offset specified by src_offset, can be written to the word offset from the frame pointer specified by dst_fp. The Empty instruction binary format can be expressed as:

An example operand set of the Empty instruction set is included in Tables 49-52 below.
Each word in an object memory fabric object can include a state indicating an empty or full state. The empty state conceptually means that the value of the corresponding word has been emptied. The full state conceptually means that the value of the corresponding word has been filled. This state can be used by certain instructions to ensure that only a single thread can read or write the word atomically. The Fill instruction binary format can be expressed as:

The Fill instruction operates similarly to a store, as shown in Table 53 below.

The word specified by src_fp can be stored in the object specified by dst_oid at the offset dst_offst. Only ssize bytes are stored. Store ignores the empty state. An example operand set of the Fill instruction set is included in Tables 54-57 below.
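The per-word empty/full discipline described above can be sketched as follows: Fill stores a value and marks the word full, and an emptying load returns the value while resetting the word to empty, so each produced value is observed by exactly one consumer. This single-threaded model is illustrative only; real hardware would defer or retry an emptying load on an empty word rather than return a sentinel.

```python
EMPTY, FULL = 0, 1

class Word:
    def __init__(self):
        self.state, self.value = EMPTY, None

    def fill(self, value):
        """Fill: store the value and mark the word full."""
        self.value, self.state = value, FULL

    def empty_load(self):
        """Emptying load: consume the word and reset it to empty."""
        if self.state != FULL:
            return None          # stand-in for hardware defer/retry
        v, self.value, self.state = self.value, None, EMPTY
        return v

w = Word()
w.fill(42)
first = w.empty_load()
second = w.empty_load()   # the word was already consumed
```

Because the consume step both reads the value and flips the state, two threads racing on `empty_load` cannot both observe the same filled value, which is the atomicity property the text describes.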
The Pointer instruction set can specify to the object memory fabric that a pointer of ptr_type can be located in the object specified by src_oid, at the object offset specified by src_offset. This information can be utilized by the object memory fabric to pre-stage data movement. The Pointer instruction binary format can be expressed as:

An example operand set of the Pointer instruction set is included in Tables 58-61 below.
The Prefetch Pointer Chain instruction set can operate, based on the policy specified by policy, over the range specified by src_offset_st to src_offset_end in the object specified by src_oid. The operand of size osize in the object specified by src_oid, at the specified object offset, can be written to the word offset from the frame pointer specified by dst_fp. The load ignores the empty state. The PrePtrChn instruction binary format can be expressed as:

An example operand set of the Prefetch Pointer Chain instruction set is included in Tables 62-66 below.
The Scan and Set Empty or Full instruction set can initialize, according to the specified policy, the object specified by src_oid at the specified offset src_offset. Scan can be used to perform a breadth-first or depth-first search and to empty or fill the next available location. The ScanEF instruction binary format can be expressed as:

An example operand set of the Scan and Set Empty or Full instruction set is included in Tables 67-71 below.
The Create instruction set includes creating an object memory fabric object of the specified ObjSize, with an object ID of OA and initialization parameters DataInit and Type. No data block storage is allocated, but storage for the first metadata block can be allocated. The binary format of the Create instruction can be expressed as:

Create Type Redundancy ObjSize OID

An example operand set of the Create instruction set is included in Tables 72-75 below.
The CopyObj (copy object) instruction set includes copying the source object specified by SOID to the destination object specified by DOID. If the DOID object is larger than the SOID object, all DOID blocks beyond the SOID size are copied as unallocated. If the SOID object is larger than the DOID object, copying terminates at the DOID size. The CopyObj instruction binary format can be expressed as:

CopyObj Ctype SOID DOID

An example operand set of the CopyObj instruction set is included in Tables 76-78.
The CopyBlk (copy block) instruction set includes copying cnum source blocks, starting at SourceObjectAddress (SOA), to the destination starting at DestinationObjectAddress (DOA). If the cnum blocks extend beyond the size of the object associated with the SOA, the undefined blocks are copied as unallocated. The CopyBlk instruction binary format can be expressed as:

CopyBlk ctype cnum SOA DOA

An example operand set of the CopyBlk instruction set is included in Tables 79-82 below.
The Allocate instruction set includes allocating storage for the object specified by OID. The Allocate instruction binary format can be expressed as:

Allocate init ASize OID

An example operand set of the Allocate instruction set is included in Tables 83-85 below.
The Deallocate instruction set includes deallocating storage for cnum blocks starting at OA. If deallocation reaches the end of the object, the operation terminates. The Deallocate instruction binary format can be expressed as:

Deallocate cnum OA

An example operand set of the Deallocate instruction set is included in Tables 86 and 87 below.
The Destroy instruction set includes completely deleting all data and metadata corresponding to the object specified by OID. The Destroy instruction binary format can be expressed as:

Destroy OID

An example operand set of the Destroy instruction set is included in Table 88 below.

Table 88. OID - object ID
Description: The object ID of the object to be deleted
The Persist instruction set includes persisting any modified blocks of the specified OID. The Persist instruction binary format can be expressed as:

Persist OID

An example operand set of the Persist instruction set is included in Table 89 below.

Table 89. OID - object ID
Description: The object ID of the object to be persisted
The AssocObj (associate object) instruction set includes associating the object OID with a VaSegment and ProcessID. Associating an OID with a VaSegment enables ObjectRelative and ObjectVA addresses to be accessed correctly by the object memory fabric. The AssocObj instruction binary format can be expressed as:

AssocObj OID ProcessID VaSegment

An example operand set of the AssocObj instruction set is included in Tables 90-92 below.

Table 90. OID - object ID
Description: The object ID of the object to be associated

Table 91. ProcessID - process ID
Description: The process ID associated with the VaSegment

Table 92. OID - object ID
Description: The object ID of the object to be associated
The DeAssocObj (de-associate object) instruction set includes de-associating the object OID from a VaSegment and ProcessID. If the ProcessID and VaSegment do not match those previously associated with the OID, an error can be returned. The DeAssocObj instruction binary format can be expressed as:

DeAssocObj OID ProcessID VaSegment

An example operand set of the DeAssocObj instruction set is included in Tables 93-95 below.

Table 93. OID - object ID
Description: The object ID of the object to be de-associated

Table 94. ProcessID - process ID
Description: The process ID associated with the VaSegment

Table 95. OID - object ID
Description: The object ID of the object to be de-associated
The AssocBlk (associate block) instruction set includes associating the block OA with a local physical address PA. This allows the object memory to associate an object memory fabric block with a PA block for access by the local processor. The AssocBlk instruction binary format can be expressed as:

AssocBlk place OA PA LS[15:00]

An example operand set of the AssocBlk instruction set is included in Tables 96-99 below.
The DeAssocBlk (de-associate block) instruction set includes de-associating the block OA from its local physical address PA. The OA will then no longer be accessible from the local PA. The DeAssocBlk instruction binary format can be expressed as:

DeAssocBlk OA PA

An example operand set of the DeAssocBlk instruction set is included in Tables 100 and 101 below.

Table 100. OA - object memory fabric object address of the block
Description: The block object address of the block to be de-associated

Table 101. PA - physical block address
Description: The local physical block address of the block to be de-associated. Corresponds to operand 2 in the encapsulation header
The OpenObj (open object) instruction set includes caching, on an advisory basis, the object specified by OID in the manner specified by TypeFetch and CacheMode. The OpenObj instruction binary format can be expressed as:

OpenObj TypeFetch CacheMode OID

An example operand set of the OpenObj instruction set is included in Tables 102-104 below.
The OpenBlk (open block) instruction set includes caching the block (or blocks) specified by OID in the manner specified by TypeFetch and CacheMode. The prefetch terminates when it extends beyond the end of the object. The OpenBlk instruction binary format can be expressed as:

OpenBlk TypeFetch CacheMode OID

An example operand set of the OpenBlk instruction set is included in Tables 105-107 below.
An example operand set of the Control Flow (short instruction format) instruction set is included in Table 108 below.
The Fork instruction set provides the instruction mechanism for creating a new thread or micro-thread. Fork specifies a new instruction pointer (NIP) and a new frame pointer for the newly created thread. At the end of the fork instruction, the thread (or micro-thread) executing the instruction and the new thread (for example, micro-thread) are both running, with fork_count incremented by 1. If the new FP has no relationship to the old FP, it can be considered a new thread; otherwise it is considered a new micro-thread. The Fork instruction binary format can be expressed as:

An example operand set of the Fork instruction set is included in Tables 109-113 below.
Join is the instruction mechanism that complements the creation of new threads or micro-threads. The Join instruction set enables a micro-thread to retire. The Join instruction decrements fork_count; if fork_count is greater than zero, no further action is taken. If fork_count is zero, this indicates that the micro-thread executing the join is the last micro-thread spawned against that fork_count, and execution continues at the next sequential instruction with the FP specified by FP. The Join instruction binary format can be expressed as:

An example operand set of the Join instruction set is included in Tables 114-117 below.
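The fork/join counting semantics described above can be sketched as follows: fork increments a shared fork_count, join decrements it, and only the micro-thread whose join drives the count to zero continues past the join point. The class and method names are illustrative assumptions.

```python
class ThreadGroup:
    def __init__(self):
        self.fork_count = 1      # the original micro-thread

    def fork(self):
        """Fork: both the forking and the new micro-thread keep running."""
        self.fork_count += 1

    def join(self):
        """Return True iff this micro-thread is the last one to join."""
        self.fork_count -= 1
        return self.fork_count == 0

g = ThreadGroup()
g.fork()
g.fork()                          # three micro-threads are now live
results = [g.join(), g.join(), g.join()]
```

The first two joins retire their micro-threads with no further action; only the third, which brings fork_count to zero, would continue executing at the next sequential instruction.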
The Branch instruction set allows branches and other conventional instructions to be added. The Branch instruction binary format can be expressed as:

An example operand set of the Execute (short instruction format) instruction set is included in Table 118 below.
Object memory fabric streams and APIs

Object memory fabric streams facilitate a mechanism by which the object memory fabric implements a distributed coherent object memory with distributed object methods. According to some embodiments, an object memory fabric stream can define a general mechanism by which hardware and software modules, in any combination, communicate in a single direction. A ring stream can support a pipelined ring organization, in which a ring of two modules is simply two one-way streams.

A stream-format API can be defined, at least in part, as two one-way streams. Thus, as part of an infinite memory fabric architecture provided in some embodiments, a stream-format API can be utilized to perform communication between two or more modules, the stream-format API defining the communication based at least in part on the object memory fabric stream protocol, such that the communication is based on distinct one-way streams.
Each stream can logically consist of instruction packages. Each instruction package can contain an extended instruction and associated data. In some embodiments, each stream can interleave sequences of requests and responses. Streams can include short packages and long packages. The short package can be referred to herein as an "instruction package", which can describe an instruction package that includes bookkeeping information and commands. A short package can include a Pull or Ack instruction and object information. Unlike the short package ("instruction package"), which does not carry object data, the long package can be referred to herein as an "object data package", which can describe a package carrying object data. An object data package can include one or more Push instructions, object information, and a single block specified by an object address space block address. All other instructions and data can be conveyed within blocks.
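The two package kinds just described can be sketched as a simple framing helper, using the sizes given in the text below (a 64-byte short package, and a 4160-byte long package consisting of a 64-byte header plus a 4096-byte block). The exact header layout is not specified, so the padding scheme here is an assumption for illustration only.

```python
SHORT_LEN = 64                      # short / instruction package (1 block)
BLOCK_LEN = 4096                    # object data block
LONG_LEN = SHORT_LEN + BLOCK_LEN    # long / object data package: 4160 bytes

def make_short_package(header: bytes) -> bytes:
    """Frame an instruction package: header padded to the 64-byte size."""
    assert len(header) <= SHORT_LEN
    return header.ljust(SHORT_LEN, b"\x00")

def make_long_package(header: bytes, block: bytes) -> bytes:
    """Frame an object data package: 64-byte header plus one 4k block."""
    assert len(block) == BLOCK_LEN
    return make_short_package(header) + block

short = make_short_package(b"PULL oid=7 off=0")
long_ = make_long_package(b"PUSH oid=7 off=0", b"\xab" * BLOCK_LEN)
```

A receiver can thus distinguish the two kinds purely by length, and every long package carries exactly one block, matching the "single block specified by an object address space block address" rule above.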
In some embodiments, for example, a short package can be 64 bytes (1 block) and a long package can be 4160 bytes (65 blocks). However, other embodiments are also possible. In some embodiments, there may be a separator (for example, a 1-byte separator). Object memory fabric streams can be connectionless, in a manner similar to UDP, and can be efficiently embedded in the UDP protocol or in a UDP-type protocol having certain characteristics the same as or similar to UDP. In various embodiments, those attributes may include any one or a combination of the following:

Transaction-oriented request-response, enabling efficient movement of object memory fabric named (for example, 128-bit object memory fabric object address) data blocks.

Packages can be routed based on block location, the requested object memory fabric object address (object address space), and object memory fabric instructions, rather than on static-IP-like node addresses.

Coherency and the object memory fabric protocol can be implemented directly.

Reliability can be provided within the end-to-end object memory fabric protocol.

Connectionless operation.

The only state can be the coherency state of each block at each end node in the system, which can be summarized at object memory fabric routing nodes to improve efficiency.
According to some embodiments of the disclosure, Table 119 below identifies non-limiting examples of aspects of the short package definition.

According to some embodiments of the disclosure, Table 120 below identifies non-limiting examples of aspects of the long package definition.

According to some embodiments of the disclosure, Table 121 below identifies non-limiting examples of aspects of the object size encoding.
A software- and/or hardware-based object can interface to two one-way streams, one in each direction. Depending on the object, there can be lower-level protocol layers, including encryption, checksums, and reliable link protocols. The object memory fabric stream protocol provides matching of request-response package pairs (and time-outs) to enhance the reliability of packages traversing any number of streams.

In some cases, with each request-response package pair averaging about 50% short packages and 50% long packages, the average efficiency relative to block transfer is 204%, using the following equation:

Efficiency = 1 / (50% * 4096 / (40 + 4136))
           = 1 / (50% * block size / (small package size + large package size))
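The arithmetic above can be checked directly; the 40-byte and 4136-byte package sizes are taken from the equation as written (they differ slightly from the 64/4160-byte framing sizes given earlier in the text, so they presumably exclude some framing overhead):

```python
block_size = 4096   # payload block, bytes
small_pkt = 40      # small (request) package size used in the equation
big_pkt = 4136      # large (response) package size used in the equation

# Each request-response pair is one small and one large package per block.
efficiency = 1 / (0.5 * block_size / (small_pkt + big_pkt))
```

This evaluates to about 2.04, i.e., the 204% figure quoted in the text.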
For links with a random error rate, a reliable link protocol can be utilized for local error detection.
Node ID
The object address space (object memory fabric object addresses) can dynamically reside in any object memory within the object memory fabric and can also migrate dynamically. There can still be (or, for example, there may need to be) a mechanism by which object memories and routers (collectively, nodes) can communicate with each other for a variety of purposes, including bookkeeping toward the original requester, setup, and maintenance. The NodeID field in a packet can be used for these purposes. DIMMs and routers can be addressed based on their hierarchical organization. Non-leaf nodes can be addressed when the lower-order fields are zero. The DIMM/software/mobile field can accommodate up to 256 or more DIMMs plus the remaining software agent threads and/or mobile devices. This addressing scheme can support up to 2^40 servers or server equivalents, up to 2^48 DIMMs, and up to 2^64 mobile devices or software threads. Examples of these fields are shown in Tables 122-124 below.
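The hierarchical addressing described above can be sketched as bit-field packing. The field widths below (40-bit server ID, 24-bit leaf subfield) are assumptions chosen only to match the stated limits (2^40 servers, 2^48 DIMMs, 2^64 leaf endpoints); the actual layout is defined by Tables 122-124.

```python
# Hypothetical sketch of hierarchical NodeID packing; widths are assumed.
SERVER_BITS = 40
LEAF_BITS = 24          # DIMM / software-thread / mobile-device subfield

def pack_node_id(server: int, leaf: int = 0) -> int:
    """Pack a hierarchical NodeID; leaf == 0 addresses the non-leaf node."""
    assert 0 <= server < (1 << SERVER_BITS)
    assert 0 <= leaf < (1 << LEAF_BITS)
    return (server << LEAF_BITS) | leaf

def is_non_leaf(node_id: int) -> bool:
    """A NodeID whose lower-order field is zero addresses a non-leaf node."""
    return (node_id & ((1 << LEAF_BITS) - 1)) == 0

router = pack_node_id(server=7)           # non-leaf: lower field is zero
dimm = pack_node_id(server=7, leaf=3)     # a DIMM within server 7
print(is_non_leaf(router), is_non_leaf(dimm))  # → True False
```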
According to some embodiments of the present invention, Tables 125 and 126 below identify non-limiting examples of aspects of the acknowledge field and acknowledge details.
According to some embodiments of the disclosure, Table 126 below identifies non-limiting examples of aspects of the acknowledge detail field. The acknowledge detail field can provide detailed status information for the corresponding request based on the packet instruction field.
In some embodiments, the topology used within the object memory fabric can be a unidirectional point-to-point ring. However, in various embodiments, the stream format will support other topologies. A logic block may include any combination of hardware, firmware, and/or software stream object interfaces. A two-object ring may include two unidirectional streams between the objects. An object connected to multiple rings can have the capability to move, translate, and/or generate packets between the rings, thereby creating the object memory fabric hierarchy.
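The ring topology above can be illustrated with a minimal software sketch (a simplification, not the hardware design): objects connected by unidirectional point-to-point streams, with each ring object forwarding a packet to its successor until the packet reaches its destination.

```python
# Toy model of a unidirectional point-to-point ring of stream objects.
class RingObject:
    def __init__(self, name):
        self.name = name
        self.delivered = []
        self.out = None          # unidirectional stream to the next object

class Packet:
    def __init__(self, dst, payload):
        self.dst, self.payload = dst, payload

def make_ring(names):
    objs = [RingObject(n) for n in names]
    for i, o in enumerate(objs):
        o.out = objs[(i + 1) % len(objs)]   # close the ring
    return objs

def send(src: RingObject, pkt: Packet):
    hop = src.out
    while hop.name != pkt.dst:              # forward around the ring
        hop = hop.out
    hop.delivered.append(pkt.payload)

ring = make_ring(["A", "B", "C", "D"])
send(ring[0], Packet("C", "block-42"))
print(ring[2].delivered)  # → ['block-42']
```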
Figure 22A is a block diagram illustrating streams within a node 2200 of an inter-node object router 2205 present in a hardware-based object memory fabric, according to some embodiments of the disclosure. In some embodiments, node 2200 can correspond to a server node. The inter-node object router 2205 may include ring objects 2210 connected in a ring orientation by physical streams 2215. In various embodiments, the ring objects can be connected in a ring 2220, which in some embodiments can be a virtual (time-division-multiplexed, TDM) ring. When hardware is shared, ring objects 2210 and streams 2215 can be any combination of physical objects and streams or TDM ring objects and streams. As depicted, one ring object 2210 can connect within the inter-node object router ring 2220 and to a stream 2225 toward the object memory fabric router. In some embodiments, more than one ring object 2210 can connect within the inter-node object router ring and to corresponding streams.
As depicted, node 2200 may include PCIe 2230, a node memory controller with DDR4 memory buses 2235, and object memory fabric object memories 2240. Each object memory fabric object memory 2240 can have at least one pair of streams connected, through the DDR4 memory bus 2235 and PCIe 2230, to a ring object 2210 of the inter-node object router by streams running at hardware performance. As depicted, there may be software objects 2245 running on any processor core 2250 that can serve as any combination of routing agents and/or object memories. A software object 2245 can have streams connecting it to ring objects 2210 within the inter-node object router 2205. Accordingly, such software object 2245 streams can flow over PCIe 2230.
Figure 22B is a block diagram illustrating an example of software emulation of object memory and routers on node 2200-1 according to some embodiments of the disclosure. A software object 2245 can, for example, emulate an object memory fabric object memory 2240. In the same manner as an actual object memory fabric object memory 2240, the software object 2245 may include identical data structures to track objects and blocks, and can respond to requests from the node's inter-node object router 2205. Software object 2245-1 can correspond, for example, to a routing agent by emulating the functions of the inter-node object router 2205. In doing so, software object 2245-1 can stream packets over standard wired and/or wireless networks to, for example, mobile, wired, and/or Internet of Things (IoT) devices 2255.
In some embodiments, the entire inter-node object router function can be implemented in one or more software objects 2245 running on one or more processing cores 2250, the only difference being performance. Moreover, as noted above, one or more processing cores 2250 can also directly access the object memory fabric object memory through conventional memory references.
Figure 23 is a block diagram illustrating an example of streams within an object memory fabric node object router 2300 according to some embodiments of the disclosure. The object memory fabric router 2300 may include ring objects 2305 connected by streams 2310. As depicted, the ring objects 2305 can be connected by streams 2310 in a ring topology. Ring objects 2305 and streams 2310 can be any combination of physical or TDM. One or more ring objects 2305 can connect to physical streams 2315 toward leaf nodes. As depicted, one ring object 2305 can connect to a physical stream 2320 toward the root node. In some embodiments, more than one ring object 2305 can connect to respective physical streams 2320 toward the root node.
API Background
Although API, which stands for Application Programming Interface, sounds as though it concerns how software interfaces to the object memory fabric, in some embodiments the primary interface of the object memory fabric can correspond to memory. In some embodiments, the object memory fabric API can correspond to how the object memory fabric is transparently set up and maintained for applications (for example, through a modified Linux libc library). Applications such as SQL databases or graph databases can use the API to create object memory fabric objects and to provide/augment metadata so that the object memory fabric can better manage objects.
In various embodiments, the overall functions of the API may include:
1. Creating and maintaining objects within the object memory fabric;
2. Associating object memory fabric objects with local virtual addresses and physical addresses;
3. Providing and augmenting metadata so that the object memory fabric can better manage objects; and/or
4. Specifying extended instruction functions and methods.
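The four functions above can be sketched as a minimal Python interface. The class and method names are illustrative assumptions, not the fabric's actual API; a real implementation would issue commands through memory command queues rather than manipulate a dictionary.

```python
# Hypothetical sketch of the four overall API functions listed above.
class ObjectMemoryFabricAPI:
    def __init__(self):
        self._objects = {}         # object address -> per-object state

    def create_object(self, obj_addr: int, size: int):
        """1. Create and maintain an object in the fabric."""
        self._objects[obj_addr] = {"size": size, "meta": {}, "vmap": None}

    def map_object(self, obj_addr: int, local_virtual_addr: int):
        """2. Associate a fabric object with a local virtual address."""
        self._objects[obj_addr]["vmap"] = local_virtual_addr

    def set_metadata(self, obj_addr: int, key: str, value):
        """3. Provide/augment metadata so the fabric can manage the object."""
        self._objects[obj_addr]["meta"][key] = value

    def define_method(self, obj_addr: int, name: str, fn):
        """4. Specify an extended-instruction function/method to offload."""
        self._objects[obj_addr]["meta"].setdefault("methods", {})[name] = fn

api = ObjectMemoryFabricAPI()
api.create_object(0x1000, size=4096)
api.map_object(0x1000, local_virtual_addr=0x7F000000)
api.set_metadata(0x1000, "type", "sql-table")
print(hex(api._objects[0x1000]["vmap"]))  # → 0x7f000000
```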
Full functionality can be achieved with API functions by using the last of these functions. Because functions and methods can be created, entire native processor sequences can be offloaded to the object memory fabric to obtain the efficiencies disclosed above with respect to the extended instruction environment and extended instructions.
The API can interface through the PCIe-based server object index, also referred to as the object memory fabric inter-node object router. The API programming model can integrate directly with applications. Multithreading can be provided (through memory command queues) so that each application logically issues commands. Each command can provide a return status and optional data. API commands can be used as part of triggers.
As described with respect to the "memory fabric distributed object memory and index" (for example, with respect to Figures 10-12 herein), three components were introduced to describe the data structures and operation of the object memory and index. These three components are shown in Table 127 below. This section discusses the physical embodiments in more depth.
Since all three components share common functionality with respect to object memory and the index, the underlying design objects can be reused across all three (a common design).
Figure 24 is a block diagram illustrating a product-family hardware implementation architecture according to some embodiments of the disclosure.
In a server, memory modules or DIMMs can be inserted into standard DDR4 memory sockets. Each memory module/DIMM can independently manage DRAM (fast and relatively expensive) and flash memory (not as fast, but considerably cheaper) in a manner that lets the processor perceive a flash-sized amount of fast DRAM (see, for example, the "object memory caching" portion herein). Each processor socket may have 8 memory sockets, or a dual-socket server may have 16 memory sockets. A node router or "uRouter" can communicate with the memory modules/DIMMs over PCIe and the memory bus using direct memory access (DMA). The memory fabric can reserve a portion of each memory module/DIMM's physical memory map to enable PCIe-based communication to and from the node router/uRouter. Thus, the combination of PCIe, the memory bus, and the memory-fabric-private portion of memory module/DIMM memory can form a virtual high-bandwidth link. This can all be transparent to application execution.
The node router/uRouter can connect to inter-node routers or "IMF-Routers" using 25/100GE fiber that carries multiple layers of the Gigabit Ethernet protocol. Inter-node routers can connect to each other using the same 25/100GE fiber. An inter-node router can provide 16 downlinks and 2 uplinks toward the root. One embodiment can use dedicated links. Another embodiment can interoperate with standard links and routers.
Figure 25 is a block diagram illustrating an alternative product-family hardware implementation architecture according to some embodiments of the disclosure. This embodiment can provide additional memory trigger instruction sets and extended object method execution resources. This makes it possible to reduce the number of servers required, because more database storage managers and engines can execute within object memory without requiring server processor resources. A serverless memory fabric node may include 16 object memories with a node router/uRouter. Ten nodes can be packaged into a single 1U rack-mount enclosure, providing a 16-fold space reduction and up to a 5-fold performance improvement.
Server node
A server may include a single node router/uRouter and one or more memory modules/DIMMs. The node router can implement an object index covering all objects/blocks held in the object memory (or memories) — the memory modules — within the same server. A memory module can hold the actual objects and the blocks within objects, the corresponding object metadata, and an object index covering the objects currently stored locally. Each memory module can independently manage DRAM (which can be, for example, fast and relatively expensive) and flash (which can be, for example, not as fast but considerably cheaper) in a manner that lets the processor perceive a flash-sized amount of fast DRAM. Memory modules and node routers can manage free storage through a free bank index, which can be implemented in the same way as the other indexes.
Figure 26 is a block diagram illustrating a memory fabric server view of the hardware implementation architecture according to some embodiments of the disclosure.
As described herein, objects can be created and maintained through the memory fabric API. The API can communicate with the node router/uRouter through the memory fabric version of libc and the memory fabric driver. The node router can then update its local object index, send commands toward the root as needed, and communicate with the appropriate memory module/DIMM to complete the API command (for example, locally). A memory module can send administrative requests back to the node router, and the node router can handle them appropriately with respect to both the memory fabric and the local Linux. Node routers and memory modules may participate in moving objects and blocks (for example, in the manner described with respect to the Figure 12 "object memory miss").
Memory module/RDIMM
An RDIMM may include DRAM (e.g., 32 gigabytes (Gbyte)), flash memory (e.g., 4 terabytes (Tbyte)), and an FPGA with DDR4-compatible buffers (per-memory-module capacities for a first-generation product). The FPGA may include all the resources, structures, and internal data structures to manage the DRAM and flash as object memory incorporated into the overall memory fabric.
Figure 27 is a block diagram illustrating a memory module view of the hardware implementation architecture according to some embodiments of the disclosure.
A single scalable and parameterizable architecture can be used to implement the memory fabric on the memory module/DIMM as well as in the node router/uRouter and the inter-node router/IMF-Router.
The internal architecture can be organized around a high-performance, scalable ring interconnect, which can implement a local version of the memory fabric coherency protocol. Each subsystem can connect to the ring through a coherent cache. The types of metadata, data, and objects stored can depend on the function of the subsystem. The routing engines in all three subsystems can be synthesized from a common design, can be highly multithreaded, and can have no long-lived threads or state. An example set of routing engines can be as follows:
1. DRAM routing engine (StreamEngine): Controls memory module DDR4 access, monitors triggers used for processor data accesses, and includes the DDR4 cache. The StreamEngine can monitor DDR4 operations for triggers and validate DDR4 cache accesses by mapping through an internal table covering the 0.5-Tbyte physical DIMM address space. The table has several possible implementations, including:
a. Fully associative: a table that translates each physical page number (excluding the low 12 address bits) to a page offset within DDR4. This has the advantage that any set of pages can be cached.
b. Set associative: identical to the fully associative technique, except that the RAS addresses the location of an associative set and gives the StreamEngine time to translate. In this way, 16-32 ways of associativity can be implemented, with performance very close to fully associative. This technique requires a table of about 128k x 4 (512k).
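The set-associative option (b) can be sketched as a toy model: the cache table is split into sets indexed by low-order page-number bits, each holding N ways of (tag → DDR4 page offset) entries. The sizes below are illustrative assumptions, not the 128k x 4 table described above.

```python
# Toy set-associative lookup table for a DDR4 page cache (assumed sizes).
PAGE_SHIFT = 12          # low 12 address bits are the in-page offset
NUM_SETS = 1024          # assumed set count
WAYS = 16                # 16-32 ways approaches fully-associative behavior

sets = [dict() for _ in range(NUM_SETS)]   # each set: tag -> DDR4 page offset

def lookup(phys_addr: int):
    page = phys_addr >> PAGE_SHIFT
    index, tag = page % NUM_SETS, page // NUM_SETS
    return sets[index].get(tag)            # None on DDR4-cache miss

def insert(phys_addr: int, ddr4_page: int):
    page = phys_addr >> PAGE_SHIFT
    index, tag = page % NUM_SETS, page // NUM_SETS
    if len(sets[index]) >= WAYS:           # evict an arbitrary way when full
        sets[index].pop(next(iter(sets[index])))
    sets[index][tag] = ddr4_page

insert(0x12345000, ddr4_page=7)
print(lookup(0x12345000), lookup(0x99999000))  # → 7 None
```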
2. Memory fabric background and API engine (ExecuteEngine): Can provide core memory fabric algorithms, such as coherency, triggers, the memory fabric API, and higher-level memory fabric instruction sequences to accelerate graphs and other big data. Can provide higher-level API and memory fabric trigger execution. Can also handle background maintenance.
3. OIT/POIT engine: Manages the OIT/POIT and supplies this service to the other engines. The engine can process one index level per two ring cycles, providing high-performance index search and management. Manages the flash for objects, metadata blocks, data blocks, and indexes.
Figure 28 is a block diagram illustrating a memory module view of the hardware implementation architecture according to some embodiments of the disclosure.
According to this embodiment, the capability of the multithreaded memory fabric background and API engine can be increased to functionally execute the various memory fabric trigger instructions. Additional instances of the updated multithreaded memory fabric background and API engine can be added to obtain higher memory fabric trigger program performance. The combination of added functionality and additional instances can be intended to enable the memory fabric to execute big data and data-manager software using fewer servers, as shown in Figure 28.
Node router
The internal architecture of the node router/uRouter can be the same as that of the memory module/DIMM, the difference being the node router's functionality: it manages the memory fabric server object index and routes appropriate packets to/from PCIe (memory modules) and the inter-node router/IMF-Router. It can have additional routing functionality and may not actually store objects.
As noted above, the example set of routing engines can be as follows:
Figure 29 is a block diagram illustrating a node router view of the hardware implementation architecture according to some embodiments of the disclosure.
1. Routing engine: Controls the routing of packets to/from PCIe (memory modules) and the inter-node router. In general, a packet entering on one path will be processed internally and exit on one path.
2. OIT/POIT engine (ObjMemEngine): Manages the OIT/POIT and supplies this service to the other engines. The engine can process one index level per two ring cycles to provide high-performance index search and management. Manages the flash and HMC (Hybrid Memory Cube) storage for the indexes. The most frequently used indexes are cached in the HMC.
3. Memory fabric background and API engine: Provides higher-level API and memory fabric trigger execution. Handles background maintenance.
Inter-node router
Figure 30 is a block diagram illustrating an inter-node router view of the hardware implementation architecture according to some embodiments of the disclosure.
An inter-node router can be similar to an IP router. Its distinguishing features can be the addressing model and static vs. dynamic behavior. An IP router uses a fixed static address for each node and routes to a fixed physical node based on the destination IP address (which can be virtualized over medium and long time frames). An inter-node router can use a memory fabric object address (OA) that specifies an object and a specific block of the object. Objects and blocks can dynamically reside on any node. The inter-node router can route OA packets based on the dynamic location (or locations) of the object and block, and can dynamically track object/block locations in real time.
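The contrast above can be sketched in simplified form (an assumption-laden toy model, not the router design): instead of a static destination-to-next-hop table, the inter-node router maps an object address and block to the block's current node and updates that mapping as blocks migrate.

```python
# Toy model of dynamic OA routing: (object, block) -> current node.
class InterNodeRouter:
    def __init__(self):
        self.block_location = {}           # (object_id, block) -> node

    def track(self, object_id, block, node):
        """Real-time tracking: record where a block now resides."""
        self.block_location[(object_id, block)] = node

    def route(self, object_id, block):
        """Return the current node for an OA packet, or None if unknown."""
        return self.block_location.get((object_id, block))

r = InterNodeRouter()
r.track("obj-A", 0, node="node-3")
r.track("obj-A", 0, node="node-9")         # the block migrated dynamically
print(r.route("obj-A", 0))                  # → node-9
```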
An inter-node router can be a scaled-up version of a node router. Instead of connecting to a single PCIe bus to reach leaf memory modules, it can connect multiple (e.g., 12-16, with 16 expected) downlink node routers or inter-node routers and two uplink inter-node routers. Object index storage capacity, processing rate, and overall routing bandwidth can also be scaled.
Figure 31 is a block diagram illustrating a memory fabric router view of the hardware implementation architecture according to some embodiments of the disclosure. The memory fabric architecture can use a memory fabric router for each downlink or uplink it connects. The memory fabric router can be nearly identical to a node router (for example, except that it supports the internal memory fabric ring, which may be identical to the on-chip version, and omits PCIe). The memory fabric ring can use the Interlaken protocol between memory fabric routers. The packet-level Interlaken protocol can be used on the downlinks and uplinks for compatibility with 10G and 100G Ethernet links. Each memory fabric router can have the same object index storage capacity, processing rate, and routing bandwidth as a node router, allowing the inter-node router to scale to support the number of downlinks and uplinks.
The object index of each downlink memory fabric router can reflect all objects or blocks reachable from its downlink. Thus, even an inter-node router can use a distributed internal object index and routing. Inter-node routers at any level can be identical toward the leaves. Since more data can be stored at each level, the larger aggregate hierarchical object memory (cache) at each level up from the leaves can tend to reduce data movement between levels. Heavily used data can be stored in multiple locations.
Implementation with standard software
The object-based memory fabric described above can provide native functions that can replace portions of virtual memory, in-memory file systems, and in-memory database storage managers, and store their data in a highly efficient format. Figure 32 is a block diagram illustrating object memory fabric functions that can substitute for software functions according to one embodiment of the disclosure. As detailed above, these object-based memory fabric functions may include function 3205, which handles blocks within in-memory objects through the object address space, and function 3210, which handles objects through object addresses and the local virtual address spaces of the nodes. Built on software functions 3205 and 3210, the object-based memory fabric can also provide in-memory file handling function 3215, in-memory database function 3220, and other in-memory functions 3225. As described above, each of these in-memory functions 3215, 3220, and 3225 can operate on memory objects in the object-based memory fabric through the object address space and the virtual address space of each node of the object-based memory fabric. With small changes to the storage managers, the object-based memory fabric and the functions thus provided can be transparent to end-user applications. Although small, these changes can greatly improve efficiency by storing data in memory in object format within the unbounded object address space. The efficiency gain is twofold: 1) the underlying in-memory object format; and 2) elimination of the conversions between storage and the various database and/or application formats.
As noted above, embodiments of the invention provide an interface to the object-based memory fabric that can be implemented below the application layer in the software stack. In this way, the difference between object-based memory and a normal address space is transparent to the application, which can use object-based memory without modification while gaining its functional and performance advantages. Instead, modified storage managers can interface system software (such as a standard operating system, e.g., Linux) to object-based memory. These modified storage managers can provide management of standard processor hardware (such as buffers and caches), can control which portion of the object-based memory space is visible in the relatively narrow physical address space available to the processor, and can be accessible to applications through standard system software. In this way, applications can access and use the object-based memory fabric through system software (for example, through standard operating-system memory allocation processes) without modification.
Figure 33 is a block diagram illustrating an object memory fabric software stack according to one embodiment of the disclosure. As shown in this example, the stack 3300 begins with and is implemented on the object-based memory fabric 3305 as detailed above. A memory fabric operating system driver 3310 can provide access to the object-based memory space of the object-based memory fabric 3305 through the memory allocation functions of the node's operating system. In some cases, the operating system can include Linux or Security-Enhanced Linux (SELinux). The memory fabric operating system driver 3310 can also provide hooks to one or more virtual machines of the operating system.
In one embodiment, the stack 3300 can also include an object-based memory-specific version of the operating system's library file 3315. For example, the library file 3315 may include an object-based memory fabric version of the standard C library libc. The library file 3315 can handle memory allocation and the file system API in a manner suited to object-based memory so as to exploit the advantages of the object-based memory fabric. Furthermore, the library file 3315 and the functions therein can be transparent to applications and users; that is, they need not be treated as different from the corresponding standard library functions.
The stack 3300 may further include a set of storage managers 3325, 3330, 3335, 3340, and 3345. In general, the storage managers 3325, 3330, 3335, 3340, and 3345 may include a set of modified storage managers adapted to exploit the format and addressing of the object-based memory space. The storage managers 3325, 3330, 3335, 3340, and 3345 can provide an interface between the object-based memory space and the operating system executed by the processor, and can provide a substitute, object-memory-based store that is transparent to the file system, database, or other software using the interface layer. The storage managers 3325, 3330, 3335, 3340, and 3345 can include, but are not limited to: a graph database storage manager 3325, an SQL or other relational database storage manager 3330, a file system storage manager 3335, and/or one or more other, different types of storage managers 3340.
According to one embodiment, a direct access interface 3320 allows the direct in-memory storage manager 3345 to interface through the object memory fabric library file 3315 to access the object memory fabric 3305 directly. Because the memory fabric 3305 manages objects in a completely consistent manner, this manager can directly access the memory fabric 3305. Both the direct access interface 3320 and the direct memory manager 3345 are enabled by the memory fabric 3305's ability to manage objects consistently. These give modified applications a path to interface directly to the memory fabric library 3315 or directly to the memory fabric 3305.
The object-based memory fabric additions to the software stack 3300 sit below the application layer to provide compatibility between a set of unmodified applications 3350, 3355, 3360, and 3365 and the object-based memory fabric 3305. Such applications may include, but are not limited to, one or more standard graph database applications 3350, one or more standard SQL or other relational database applications 3355, one or more standard file system access applications 3360, and/or one or more other standard, unmodified applications 3365. The object-based memory fabric additions to the software stack 3300 include the memory fabric operating system driver 3310, the object-based memory-specific library file 3315, and the storage managers 3325, 3330, 3335, 3340, and 3345, and can thus provide the interface between the applications 3350, 3355, 3360, and 3365 and the object-based memory fabric 3305. This interface layer can control the visibility of portions of the object-based memory space in the processor's virtual address space and physical address space — that is, the page fault and page handlers, which control which portion of the object address space is currently visible in each node's physical address space and coordinate the relationship between memory objects and application segments and files. According to one embodiment, the object access permissions of each application 3350, 3355, 3360, and 3365 can be determined through object-based memory fabric access control lists (ACLs) or equivalents.
In other words, each hardware-based processing node of the object memory fabric 3305 (as detailed above) may include a memory module that stores and manages one or more memory objects within the object-based memory space. Furthermore, as described above, each memory object can be created natively within the memory module, accessed using single-memory-reference instructions without input/output (I/O) instructions, and managed by the memory module within a single memory layer. The memory module can provide an interface layer 3310, 3315, 3320, 3325, 3330, 3335, 3340, and 3345 below the application layer 3350, 3355, 3360, and 3365 of the software stack 3300. The interface layer may include one or more storage managers 3325, 3330, 3335, 3340, and 3345 that manage the processor's hardware and control the visibility of portions of the object-based memory space in the processor's virtual address space and physical address space for each hardware-based processing node of the object-based memory fabric 3305. The one or more storage managers 3325, 3330, 3335, 3340, and 3345 can further provide an interface between the object-based memory space and the operating system executed by the processor of each hardware-based processing node, and can provide a substitute, object-memory-based store that is transparent to the file system, database, or other software of the application layer 3350, 3355, 3360, and 3365 of the software stack 3300 using the interface layer 3310, 3315, 3320, 3325, 3330, 3335, 3340, and 3345. In some cases, the operating system can include Linux or Security-Enhanced Linux (SELinux). Memory objects created and managed by the memory fabric can be created and managed equivalently from any node of the memory fabric. Thus, a multi-node memory fabric does not need centralized storage managers or memory fabric libraries.
The interface layer 3310, 3315, 3320, 3325, 3330, 3335, 3340, and 3345 can, through the operating system's memory allocation functions, provide one or more applications executing in the application layer of the software stack with access to the object-based memory space. In one embodiment, the interface layer may include an object-based memory-specific version of the operating system's library file 3315. The one or more storage managers 3325, 3330, 3335, 3340, and 3345 can use the format and addressing of the object-based memory space. The one or more storage managers may include, for example, a database manager 3330, a graph database manager 3325, and/or a file system manager 3335.
The disclosure, in various aspects, embodiments, and/or configurations, includes components, methods, processes, systems, and/or devices substantially as depicted and described herein, including various aspects, embodiments, configurations, sub-combinations, and/or subsets thereof. After understanding the present disclosure, those skilled in the art will understand how to make and use the disclosed aspects, embodiments, and/or configurations. In various aspects, embodiments, and/or configurations, the disclosure includes providing devices and processes in the absence of items not depicted and/or described herein, or in various aspects, embodiments, and/or configurations hereof, including in the absence of such items as may have been used in previous devices or processes, for example, for improving performance, achieving ease of implementation, and/or reducing cost of implementation.
It has presented for the purpose of illustration and description discussed above.Foregoing teachings are not intended to for the disclosure being limited to
One or more forms disclosed herein.In foregoing detailed description, in order to simplify the purpose of the disclosure, the disclosure it is various
Feature is for example grouped together in one or more aspects, embodiment and/or configuration.The aspect of the disclosure, embodiment and/
Or the feature of configuration can be combined in the alternative aspect other than discussed in the above, embodiment and/or configuration.The disclosure
Method is not necessarily to be construed as reflecting the intention that claim needs the more features than being expressly recited in each claim.And
Be it is as reflected in the following claims, inventive aspect is than single aforementioned disclosed aspect, embodiment and/or the institute of configuration
There is feature to lack.Therefore, following claims are incorporated into herein in the detailed description, wherein each claim itself is as this
Individual preferred embodiment is disclosed.
In addition, although description included to one or more aspects, embodiment and/or configuration and certain modifications and
The description of modification, but other modifications, combination and modification are also within the scope of this disclosure, for example, after understanding the disclosure
Those skilled in the art skills and knowledge in the range of.The purpose is to obtain in allowed limits including alternative aspect,
Embodiment and/or the right of configuration, including substitution, interchangeable and/or equivalent structure, function, range or step, no matter
These substitutions, interchangeable and/or equivalent structure, function, range or step whether are disclosed herein, and are not intended to public affairs
Open the theme for being exclusively used in any patentability.
Claims (42)
1. A hardware-based processing node of an object memory fabric, the processing node comprising:
a memory module storing and managing one or more memory objects within an object-based memory space, wherein:
each memory object is created natively within the memory module,
each memory object is accessed using a single memory reference instruction without Input/Output (I/O) instructions,
each memory object is managed by the memory module at a single memory layer, and
the memory module provides an interface layer below an application layer of a software stack, the interface layer comprising one or more storage managers that manage hardware of a processor and control which portions of the object-based memory space are visible to a virtual address space and a physical address space of the processor.
2. The hardware-based processing node of claim 1, wherein the one or more storage managers further provide an interface between the object-based memory space and an operating system executed by the processor.
3. The hardware-based processing node of claim 2, wherein the one or more storage managers provide an alternate object-memory-based storage that is transparent to file systems, databases, or other software using the interface layer.
4. The hardware-based processing node of claim 2, wherein the operating system comprises Linux or Security-Enhanced Linux (SELinux).
5. The hardware-based processing node of claim 2, wherein the interface layer provides access to the object-based memory space to one or more applications executing in the application layer of the software stack.
6. The hardware-based processing node of claim 5, wherein the interface layer provides the access to the object-based memory space through a memory allocation function of the operating system.
7. The hardware-based processing node of claim 2, wherein the interface layer comprises an object-based memory version of a library file of the operating system.
8. The hardware-based processing node of claim 1, wherein the one or more storage managers utilize a format and addressing of the object-based memory space.
9. The hardware-based processing node of claim 1, wherein the one or more storage managers comprise at least one database manager.
10. The hardware-based processing node of claim 1, wherein the one or more storage managers comprise at least one graph database manager.
11. The hardware-based processing node of claim 1, wherein the one or more storage managers comprise at least one file system manager.
12. The hardware-based processing node of claim 1, wherein the one or more storage managers comprise at least one direct storage manager that provides applications with direct access to the memory fabric, the applications being modified to use an object-based memory version of a library file of the operating system.
13. The hardware-based processing node of claim 1, wherein the hardware-based processing node comprises a Dual In-line Memory Module (DIMM) card.
14. The hardware-based processing node of claim 1, wherein the hardware-based processing node comprises a commodity server, and wherein the memory module comprises a Dual In-line Memory Module (DIMM) card installed in the commodity server.
15. The hardware-based processing node of claim 1, wherein the hardware-based processing node comprises a mobile computing device.
16. The hardware-based processing node of claim 1, wherein the hardware-based processing node comprises a single chip.
17. An object memory fabric comprising:
a plurality of hardware-based processing nodes, each hardware-based processing node comprising:
a memory module storing and managing one or more memory objects within an object-based memory space, wherein each memory object is created natively within the memory module, each memory object is accessed using a single memory reference instruction without Input/Output (I/O) instructions, and each memory object is managed by the memory module at a single memory layer, and wherein the memory module provides an interface layer below an application layer of a software stack, the interface layer comprising one or more storage managers that manage hardware of a processor and control which portions of the object-based memory space are visible to a virtual address space and a physical address space of the processor;
a node router communicatively coupled with each of one or more memory modules of the node and adapted to route memory objects or portions of memory objects between the one or more memory modules of the node; and
one or more inter-node routers communicatively coupled with each node router, wherein each of the plurality of nodes of the object memory fabric is communicatively coupled with at least one of the inter-node routers and adapted to route memory objects or portions of memory objects between the plurality of nodes.
18. The object memory fabric of claim 17, wherein the one or more storage managers further provide an interface between the object-based memory space and an operating system executed by the processor.
19. The object memory fabric of claim 18, wherein the one or more storage managers provide an alternate object-memory-based storage that is transparent to file systems, databases, or other software using the interface layer.
20. The object memory fabric of claim 18, wherein the operating system comprises Linux or Security-Enhanced Linux (SELinux).
21. The object memory fabric of claim 18, wherein the interface layer provides access to the object-based memory space to one or more applications executing in the application layer of the software stack.
22. The object memory fabric of claim 21, wherein the interface layer provides the access to the object-based memory space through a memory allocation function of the operating system.
23. The object memory fabric of claim 18, wherein the interface layer comprises an object-based memory version of a library file of the operating system.
24. The object memory fabric of claim 17, wherein the one or more storage managers utilize a format and addressing of the object-based memory space.
25. The object memory fabric of claim 17, wherein the one or more storage managers comprise at least one database manager.
26. The object memory fabric of claim 17, wherein the one or more storage managers comprise at least one graph database manager.
27. The object memory fabric of claim 17, wherein the one or more storage managers comprise at least one file system manager.
28. The object memory fabric of claim 17, wherein the one or more storage managers comprise at least one direct storage manager that provides applications with direct access to the memory fabric, the applications being modified to use an object-based memory version of a library file of the operating system.
29. The object memory fabric of claim 17, wherein memory objects are created and managed equivalently from any hardware-based processing node of the object memory fabric without a centralized storage manager or memory fabric class library.
30. The object memory fabric of claim 17, wherein at least one hardware-based processing node comprises a Dual In-line Memory Module (DIMM) card.
31. The object memory fabric of claim 17, wherein at least one hardware-based processing node comprises a commodity server, and wherein the memory module comprises a Dual In-line Memory Module (DIMM) card installed in the commodity server.
32. The object memory fabric of claim 17, wherein at least one hardware-based processing node comprises a mobile computing device.
33. The object memory fabric of claim 17, wherein at least one hardware-based processing node comprises a single chip.
34. A method for interfacing an object-based memory fabric with software executing on one or more nodes of the object-based memory fabric, the method comprising:
creating, by a hardware-based processing node of the object-based memory fabric, each memory object natively within a memory module of the hardware-based processing node;
accessing, by the hardware-based processing node, each memory object using a single memory reference instruction without Input/Output (I/O) instructions;
managing, by the hardware-based processing node, each memory object within the memory module at a single memory layer; and
providing, by the hardware-based processing node, an interface layer below an application layer of a software stack, the interface layer comprising one or more storage managers that manage hardware of a processor and control which portions of the object-based memory space are visible to a virtual address space and a physical address space of the processor.
35. The method of claim 34, further comprising providing, by the hardware-based processing node through the one or more storage managers, an interface between the object-based memory space and an operating system executed by the processor.
36. The method of claim 35, further comprising providing, by the hardware-based processing node through the one or more storage managers, an alternate object-memory-based storage that is transparent to file systems, databases, or other software using the interface layer.
37. The method of claim 35, wherein the operating system comprises Linux or Security-Enhanced Linux (SELinux).
38. The method of claim 35, further comprising providing, by the hardware-based processing node through the interface layer, access to the object-based memory space to one or more applications executing in the application layer of the software stack.
39. The method of claim 38, further comprising providing, by the hardware-based processing node through the interface layer, the access to the object-based memory space through a memory allocation function of the operating system.
40. The method of claim 35, wherein the interface layer comprises an object-based memory version of a library file of the operating system.
41. The method of claim 34, wherein the one or more storage managers utilize a format and addressing of the object-based memory space.
42. The method of claim 34, wherein the one or more storage managers comprise at least one of a database manager, a graph database manager, or a file system manager.
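The two-level routing recited in the fabric claims (a node router moving objects between a node's memory modules, and inter-node routers moving them between nodes) can be sketched in Python. This is a hedged illustration only; every class and method name here is invented for the sketch, and real routing would move objects or object portions in hardware rather than Python dictionaries.

```python
# Hypothetical sketch of the two-level routing hierarchy: NodeRouter routes
# within one node's memory modules; InterNodeRouter routes between nodes.
class MemoryModule:
    def __init__(self):
        self.objects = {}


class NodeRouter:
    def __init__(self, modules):
        self.modules = modules

    def route(self, obj_id, src: int, dst: int):
        # Intra-node: move an object between this node's memory modules.
        self.modules[dst].objects[obj_id] = self.modules[src].objects.pop(obj_id)


class InterNodeRouter:
    def __init__(self, node_routers):
        self.node_routers = node_routers

    def route(self, obj_id, src_node, src_mod, dst_node, dst_mod):
        # Inter-node: move an object from one node's module to another's.
        obj = self.node_routers[src_node].modules[src_mod].objects.pop(obj_id)
        self.node_routers[dst_node].modules[dst_mod].objects[obj_id] = obj


n0 = NodeRouter([MemoryModule(), MemoryModule()])
n1 = NodeRouter([MemoryModule()])
fabric = InterNodeRouter([n0, n1])

n0.modules[0].objects["obj"] = b"payload"
n0.route("obj", 0, 1)             # intra-node routing
fabric.route("obj", 0, 1, 1, 0)   # inter-node routing
assert "obj" in n1.modules[0].objects
```

The hierarchy matters because it keeps intra-node movement local: only traffic that must cross nodes reaches the inter-node routers.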
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201562264731P | 2015-12-08 | 2015-12-08 | |
US62/264,731 | 2015-12-08 | ||
PCT/US2016/065320 WO2017100281A1 (en) | 2015-12-08 | 2016-12-07 | Memory fabric software implementation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108885604A true CN108885604A (en) | 2018-11-23 |
CN108885604B CN108885604B (en) | 2022-04-12 |
Family
ID=59013221
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201680080706.0A Active CN108885604B (en) | 2015-12-08 | 2016-12-07 | Memory architecture software implementation |
Country Status (5)
Country | Link |
---|---|
US (2) | US11269514B2 (en) |
EP (1) | EP3387547B1 (en) |
CN (1) | CN108885604B (en) |
CA (1) | CA3006773A1 (en) |
WO (1) | WO2017100281A1 (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109491190A (en) * | 2018-12-10 | 2019-03-19 | 海南大学 | Screen and optical projection system are actively moved in the air |
CN109725856A (en) * | 2018-12-29 | 2019-05-07 | 深圳市网心科技有限公司 | A kind of shared node management method, device, electronic equipment and storage medium |
CN113807531A (en) * | 2020-06-12 | 2021-12-17 | 百度(美国)有限责任公司 | AI model transfer method using address randomization |
CN115994145A (en) * | 2023-02-09 | 2023-04-21 | 中国证券登记结算有限责任公司 | Method and device for processing data |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11755201B2 (en) | 2015-01-20 | 2023-09-12 | Ultrata, Llc | Implementation of an object memory centric cloud |
WO2016118624A1 (en) | 2015-01-20 | 2016-07-28 | Ultrata Llc | Object memory instruction set |
US9971542B2 (en) | 2015-06-09 | 2018-05-15 | Ultrata, Llc | Infinite memory fabric streams and APIs |
US9886210B2 (en) | 2015-06-09 | 2018-02-06 | Ultrata, Llc | Infinite memory fabric hardware implementation with router |
US10698628B2 (en) | 2015-06-09 | 2020-06-30 | Ultrata, Llc | Infinite memory fabric hardware implementation with memory |
WO2017100281A1 (en) * | 2015-12-08 | 2017-06-15 | Ultrata, Llc | Memory fabric software implementation |
WO2017100292A1 (en) | 2015-12-08 | 2017-06-15 | Ultrata, Llc. | Object memory interfaces across shared links |
EP3692446A4 (en) * | 2017-10-05 | 2021-05-05 | Telefonaktiebolaget LM Ericsson (publ) | Method and a migration component for migrating an application |
CN111857731A (en) * | 2020-07-31 | 2020-10-30 | 广州锦行网络科技有限公司 | Flux storage method based on linux platform |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6115790A (en) * | 1997-08-29 | 2000-09-05 | Silicon Graphics, Inc. | System, method and computer program product for organizing page caches |
US20070234290A1 (en) * | 2006-03-31 | 2007-10-04 | Benzi Ronen | Interactive container of development components and solutions |
US20070245111A1 (en) * | 2006-04-18 | 2007-10-18 | International Business Machines Corporation | Methods, systems, and computer program products for managing temporary storage |
US20080189251A1 (en) * | 2006-08-25 | 2008-08-07 | Jeremy Branscome | Processing elements of a hardware accelerated reconfigurable processor for accelerating database operations and queries |
CN101496005A (en) * | 2005-12-29 | 2009-07-29 | 亚马逊科技公司 | Distributed replica storage system with web services interface |
US20130031364A1 (en) * | 2011-07-19 | 2013-01-31 | Gerrity Daniel A | Fine-grained security in federated data sets |
Family Cites Families (213)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4326247A (en) | 1978-09-25 | 1982-04-20 | Motorola, Inc. | Architecture for data processor |
US4736317A (en) | 1985-07-17 | 1988-04-05 | Syracuse University | Microprogram-coupled multiple-microprocessor module with 32-bit byte width formed of 8-bit byte width microprocessors |
US5297279A (en) | 1990-05-30 | 1994-03-22 | Texas Instruments Incorporated | System and method for database management supporting object-oriented programming |
AU5550194A (en) | 1993-09-27 | 1995-04-18 | Giga Operations Corporation | Implementation of a selected instruction set cpu in programmable hardware |
US5581765A (en) | 1994-08-30 | 1996-12-03 | International Business Machines Corporation | System for combining a global object identifier with a local object address in a single object pointer |
US5664207A (en) | 1994-12-16 | 1997-09-02 | Xcellenet, Inc. | Systems and methods for automatically sharing information among remote/mobile nodes |
EP0976034B1 (en) | 1996-01-24 | 2005-10-19 | Sun Microsystems, Inc. | Method and apparatus for stack caching |
US5781906A (en) | 1996-06-06 | 1998-07-14 | International Business Machines Corporation | System and method for construction of a data structure for indexing multidimensional objects |
US5889954A (en) | 1996-12-20 | 1999-03-30 | Ericsson Inc. | Network manager providing advanced interconnection capability |
US5859849A (en) | 1997-05-06 | 1999-01-12 | Motorola Inc. | Modular switch element for shared memory switch fabric |
JP2001515243A (en) | 1997-09-05 | 2001-09-18 | サン・マイクロシステムズ・インコーポレーテッド | A multiprocessing computer system using a cluster protection mechanism. |
US6366876B1 (en) | 1997-09-29 | 2002-04-02 | Sun Microsystems, Inc. | Method and apparatus for assessing compatibility between platforms and applications |
US6804766B1 (en) | 1997-11-12 | 2004-10-12 | Hewlett-Packard Development Company, L.P. | Method for managing pages of a designated memory object according to selected memory management policies |
US5987468A (en) | 1997-12-12 | 1999-11-16 | Hitachi America Ltd. | Structure and method for efficient parallel high-dimensional similarity join |
US6480927B1 (en) | 1997-12-31 | 2002-11-12 | Unisys Corporation | High-performance modular memory system with crossbar connections |
US6560403B1 (en) | 1998-01-30 | 2003-05-06 | Victor Company Of Japan, Ltd. | Signal encoding apparatus, audio data transmitting method, audio data recording method, audio data decoding method and audio disc |
US6230151B1 (en) | 1998-04-16 | 2001-05-08 | International Business Machines Corporation | Parallel classification for data mining in a shared-memory multiprocessor system |
US9361243B2 (en) | 1998-07-31 | 2016-06-07 | Kom Networks Inc. | Method and system for providing restricted access to a storage medium |
WO2000028437A1 (en) | 1998-11-06 | 2000-05-18 | Lumen | Directory protocol based data storage |
US6470436B1 (en) | 1998-12-01 | 2002-10-22 | Fast-Chip, Inc. | Eliminating memory fragmentation and garbage collection from the process of managing dynamically allocated memory |
AU4851400A (en) | 1999-05-14 | 2000-12-18 | Dunti Corporation | Relative hierarchical communication network having distributed routing across modular switches with packetized security codes, parity switching, and priority transmission schemes |
US6470344B1 (en) | 1999-05-29 | 2002-10-22 | Oracle Corporation | Buffering a hierarchical index of multi-dimensional data |
US6587874B1 (en) | 1999-06-29 | 2003-07-01 | Cisco Technology, Inc. | Directory assisted autoinstall of network devices |
US6477620B1 (en) | 1999-12-20 | 2002-11-05 | Unisys Corporation | Cache-level return data by-pass system for a hierarchical memory |
US6421769B1 (en) | 1999-12-30 | 2002-07-16 | Intel Corporation | Efficient memory management for channel drivers in next generation I/O system |
WO2001063486A2 (en) | 2000-02-24 | 2001-08-30 | Findbase, L.L.C. | Method and system for extracting, analyzing, storing, comparing and reporting on data stored in web and/or other network repositories and apparatus to detect, prevent and obfuscate information removal from information servers |
DE60129795T2 (en) | 2000-02-29 | 2008-06-12 | Benjamin D. Tucson Baker | INTELLIGENT CALL PROCESS FOR A DISCUSSION FORUM |
US6651163B1 (en) | 2000-03-08 | 2003-11-18 | Advanced Micro Devices, Inc. | Exception handling with reduced overhead in a multithreaded multiprocessing system |
US6957230B2 (en) | 2000-11-30 | 2005-10-18 | Microsoft Corporation | Dynamically generating multiple hierarchies of inter-object relationships based on object attribute values |
US6941417B1 (en) | 2000-12-15 | 2005-09-06 | Shahram Abdollahi-Alibeik | High-speed low-power CAM-based search engine |
US6647466B2 (en) | 2001-01-25 | 2003-11-11 | Hewlett-Packard Development Company, L.P. | Method and apparatus for adaptively bypassing one or more levels of a cache hierarchy |
AU2002242026A1 (en) | 2001-01-29 | 2002-08-12 | Snap Appliance Inc. | Dynamically distributed file system |
US20040205740A1 (en) | 2001-03-29 | 2004-10-14 | Lavery Daniel M. | Method for collection of memory reference information and memory disambiguation |
EP1367778A1 (en) | 2002-05-31 | 2003-12-03 | Fujitsu Siemens Computers, LLC | Networked computer system and method using dual bi-directional communication rings |
JP3851228B2 (en) | 2002-06-14 | 2006-11-29 | 松下電器産業株式会社 | Processor, program conversion apparatus, program conversion method, and computer program |
US8612404B2 (en) | 2002-07-30 | 2013-12-17 | Stored Iq, Inc. | Harvesting file system metsdata |
US20040133590A1 (en) | 2002-08-08 | 2004-07-08 | Henderson Alex E. | Tree data structure with range-specifying keys and associated methods and apparatuses |
US7178132B2 (en) | 2002-10-23 | 2007-02-13 | Microsoft Corporation | Forward walking through binary code to determine offsets for stack walking |
US20080008202A1 (en) | 2002-10-31 | 2008-01-10 | Terrell William C | Router with routing processors and methods for virtualization |
US7457822B1 (en) | 2002-11-01 | 2008-11-25 | Bluearc Uk Limited | Apparatus and method for hardware-based file system |
CN1879091A (en) | 2002-11-07 | 2006-12-13 | 皇家飞利浦电子股份有限公司 | Method and device for persistent-memory management |
KR100918733B1 (en) | 2003-01-30 | 2009-09-24 | 삼성전자주식회사 | Distributed router and method for dynamically managing forwarding information |
JP2004280752A (en) | 2003-03-19 | 2004-10-07 | Sony Corp | Date storage device, management information updating method for data storage device, and computer program |
US7587422B2 (en) | 2003-04-24 | 2009-09-08 | Neopath Networks, Inc. | Transparent file replication using namespace replication |
US20050004924A1 (en) | 2003-04-29 | 2005-01-06 | Adrian Baldwin | Control of access to databases |
US7512638B2 (en) | 2003-08-21 | 2009-03-31 | Microsoft Corporation | Systems and methods for providing conflict handling for peer-to-peer synchronization of units of information manageable by a hardware/software interface system |
US7617510B2 (en) | 2003-09-05 | 2009-11-10 | Microsoft Corporation | Media network using set-top boxes as nodes |
US7865485B2 (en) | 2003-09-23 | 2011-01-04 | Emc Corporation | Multi-threaded write interface and methods for increasing the single file read and write throughput of a file server |
US7630282B2 (en) | 2003-09-30 | 2009-12-08 | Victor Company Of Japan, Ltd. | Disk for audio data, reproduction apparatus, and method of recording/reproducing audio data |
US20050102670A1 (en) | 2003-10-21 | 2005-05-12 | Bretl Robert F. | Shared object memory with object management for multiple virtual machines |
US7155444B2 (en) | 2003-10-23 | 2006-12-26 | Microsoft Corporation | Promotion and demotion techniques to facilitate file property management between object systems |
US7149858B1 (en) | 2003-10-31 | 2006-12-12 | Veritas Operating Corporation | Synchronous replication for system and data security |
US7620630B2 (en) | 2003-11-12 | 2009-11-17 | Oliver Lloyd Pty Ltd | Directory system |
US7333993B2 (en) | 2003-11-25 | 2008-02-19 | Network Appliance, Inc. | Adaptive file readahead technique for multiple read streams |
US7188128B1 (en) | 2003-12-12 | 2007-03-06 | Veritas Operating Corporation | File system and methods for performing file create and open operations with efficient storage allocation |
US7657706B2 (en) | 2003-12-18 | 2010-02-02 | Cisco Technology, Inc. | High speed memory and input/output processor subsystem for efficiently allocating and using high-speed memory and slower-speed memory |
KR100600862B1 (en) | 2004-01-30 | 2006-07-14 | 김선권 | Method of collecting and searching for access route of infomation resource on internet and Computer readable medium stored thereon program for implementing the same |
US20050240748A1 (en) | 2004-04-27 | 2005-10-27 | Yoder Michael E | Locality-aware interface for kernal dynamic memory |
US7251663B1 (en) | 2004-04-30 | 2007-07-31 | Network Appliance, Inc. | Method and apparatus for determining if stored memory range overlaps key memory ranges where the memory address space is organized in a tree form and partition elements for storing key memory ranges |
US20050273571A1 (en) * | 2004-06-02 | 2005-12-08 | Lyon Thomas L | Distributed virtual multiprocessor |
US7278122B2 (en) | 2004-06-24 | 2007-10-02 | Ftl Systems, Inc. | Hardware/software design tool and language specification mechanism enabling efficient technology retargeting and optimization |
US8713295B2 (en) | 2004-07-12 | 2014-04-29 | Oracle International Corporation | Fabric-backplane enterprise servers with pluggable I/O sub-system |
US7386566B2 (en) | 2004-07-15 | 2008-06-10 | Microsoft Corporation | External metadata processing |
US8769106B2 (en) | 2004-07-29 | 2014-07-01 | Thomas Sheehan | Universal configurable device gateway |
US7769974B2 (en) | 2004-09-10 | 2010-08-03 | Microsoft Corporation | Increasing data locality of recently accessed resources |
US7350048B1 (en) | 2004-10-28 | 2008-03-25 | Sun Microsystems, Inc. | Memory system topology |
US7467272B2 (en) | 2004-12-16 | 2008-12-16 | International Business Machines Corporation | Write protection of subroutine return addresses |
US7694065B2 (en) | 2004-12-28 | 2010-04-06 | Sap Ag | Distributed cache architecture |
US7539821B2 (en) | 2004-12-28 | 2009-05-26 | Sap Ag | First in first out eviction implementation |
US7315871B2 (en) | 2005-01-19 | 2008-01-01 | International Business Machines Inc. Corporation | Method, system and program product for interning invariant data objects in dynamic space constrained systems |
US20060174089A1 (en) | 2005-02-01 | 2006-08-03 | International Business Machines Corporation | Method and apparatus for embedding wide instruction words in a fixed-length instruction set architecture |
US9104315B2 (en) | 2005-02-04 | 2015-08-11 | Sandisk Technologies Inc. | Systems and methods for a mass data storage system having a file-based interface to a host and a non-file-based interface to secondary storage |
US7689784B2 (en) | 2005-03-18 | 2010-03-30 | Sony Computer Entertainment Inc. | Methods and apparatus for dynamic linking program overlay |
US7287114B2 (en) | 2005-05-10 | 2007-10-23 | Intel Corporation | Simulating multiple virtual channels in switched fabric networks |
US7200023B2 (en) * | 2005-05-12 | 2007-04-03 | International Business Machines Corporation | Dual-edged DIMM to support memory expansion |
US8089795B2 (en) | 2006-02-09 | 2012-01-03 | Google Inc. | Memory module with memory stack and interface with enhanced capabilities |
US9171585B2 (en) | 2005-06-24 | 2015-10-27 | Google Inc. | Configurable memory circuit system and method |
SG162825A1 (en) | 2005-06-24 | 2010-07-29 | Research In Motion Ltd | System and method for managing memory in a mobile device |
US7689602B1 (en) | 2005-07-20 | 2010-03-30 | Bakbone Software, Inc. | Method of creating hierarchical indices for a distributed object system |
CN100367727C (en) | 2005-07-26 | 2008-02-06 | 华中科技大学 | Expandable storage system and control method based on objects |
US7421566B2 (en) | 2005-08-12 | 2008-09-02 | International Business Machines Corporation | Implementing instruction set architectures with non-contiguous register file specifiers |
US20070038984A1 (en) | 2005-08-12 | 2007-02-15 | Gschwind Michael K | Methods for generating code for an architecture encoding an extended register specification |
US7917474B2 (en) | 2005-10-21 | 2011-03-29 | Isilon Systems, Inc. | Systems and methods for accessing and updating distributed data |
US7804769B1 (en) | 2005-12-01 | 2010-09-28 | Juniper Networks, Inc. | Non-stop forwarding in a multi-chassis router |
US7710872B2 (en) | 2005-12-14 | 2010-05-04 | Cisco Technology, Inc. | Technique for enabling traffic engineering on CE-CE paths across a provider network |
US9002795B2 (en) | 2006-01-26 | 2015-04-07 | Seagate Technology Llc | Object-based data storage device |
US7584332B2 (en) | 2006-02-17 | 2009-09-01 | University Of Notre Dame Du Lac | Computer systems with lightweight multi-threaded architectures |
GB0605383D0 (en) | 2006-03-17 | 2006-04-26 | Williams Paul N | Processing system |
US7472249B2 (en) | 2006-06-30 | 2008-12-30 | Sun Microsystems, Inc. | Kernel memory free algorithm |
US8165111B2 (en) | 2006-07-25 | 2012-04-24 | PSIMAST, Inc | Telecommunication and computing platforms with serial packet switched integrated memory access technology |
US7853752B1 (en) | 2006-09-29 | 2010-12-14 | Tilera Corporation | Caching in multicore and multiprocessor architectures |
US7647471B2 (en) | 2006-11-17 | 2010-01-12 | Sun Microsystems, Inc. | Method and system for collective file access using an mmap (memory-mapped file) |
JP2010512584A (en) | 2006-12-06 | 2010-04-22 | フュージョン マルチシステムズ,インク.(ディービイエイ フュージョン−アイオー) | Apparatus, system and method for managing data from a requesting device having an empty data token command |
US8151082B2 (en) | 2007-12-06 | 2012-04-03 | Fusion-Io, Inc. | Apparatus, system, and method for converting a storage request into an append data storage command |
US20080163183A1 (en) | 2006-12-29 | 2008-07-03 | Zhiyuan Li | Methods and apparatus to provide parameterized offloading on multiprocessor architectures |
US20080209406A1 (en) | 2007-02-27 | 2008-08-28 | Novell, Inc. | History-based call stack construction |
US8001539B2 (en) | 2007-02-28 | 2011-08-16 | Jds Uniphase Corporation | Historical data management |
US8706914B2 (en) | 2007-04-23 | 2014-04-22 | David D. Duchesneau | Computing infrastructure |
US20090006831A1 (en) | 2007-06-30 | 2009-01-01 | Wah Yiu Kwong | Methods and apparatuses for configuring add-on hardware to a computing platform |
US7730278B2 (en) | 2007-07-12 | 2010-06-01 | Oracle America, Inc. | Chunk-specific executable code for chunked java object heaps |
US7716449B2 (en) | 2007-07-12 | 2010-05-11 | Oracle America, Inc. | Efficient chunked java object heaps |
US9824006B2 (en) | 2007-08-13 | 2017-11-21 | Digital Kiva, Inc. | Apparatus and system for object-based storage solid-state device |
US7990851B2 (en) | 2007-11-11 | 2011-08-02 | Weed Instrument, Inc. | Method, apparatus and computer program product for redundant ring communication |
US8195912B2 (en) | 2007-12-06 | 2012-06-05 | Fusion-io, Inc | Apparatus, system, and method for efficient mapping of virtual and physical addresses |
US8069311B2 (en) | 2007-12-28 | 2011-11-29 | Intel Corporation | Methods for prefetching data in a memory storage structure |
US8484307B2 (en) | 2008-02-01 | 2013-07-09 | International Business Machines Corporation | Host fabric interface (HFI) to perform global shared memory (GSM) operations |
US8250308B2 (en) | 2008-02-15 | 2012-08-21 | International Business Machines Corporation | Cache coherency protocol with built in avoidance for conflicting responses |
US8018729B2 (en) | 2008-02-19 | 2011-09-13 | Lsi Corporation | Method and housing for memory module including battery backup |
EP2096564B1 (en) | 2008-02-29 | 2018-08-08 | Euroclear SA/NV | Improvements relating to handling and processing of massive numbers of processing instructions in real time |
KR20090096942A (en) | 2008-03-10 | 2009-09-15 | 이필승 | Steel grating |
US8219564B1 (en) | 2008-04-29 | 2012-07-10 | Netapp, Inc. | Two-dimensional indexes for quick multiple attribute search in a catalog system |
US8775373B1 (en) | 2008-05-21 | 2014-07-08 | Translattice, Inc. | Deleting content in a distributed computing environment |
US8775718B2 (en) | 2008-05-23 | 2014-07-08 | Netapp, Inc. | Use of RDMA to access non-volatile solid-state memory in a network storage system |
US7885967B2 (en) | 2008-05-30 | 2011-02-08 | Red Hat, Inc. | Management of large dynamic tables |
US8943271B2 (en) | 2008-06-12 | 2015-01-27 | Microsoft Corporation | Distributed cache arrangement |
US8060692B2 (en) | 2008-06-27 | 2011-11-15 | Intel Corporation | Memory controller using time-staggered lockstep sub-channels with buffered memory |
WO2010002411A1 (en) | 2008-07-03 | 2010-01-07 | Hewlett-Packard Development Company, L.P. | Memory server |
US8412878B2 (en) | 2008-07-14 | 2013-04-02 | Marvell World Trade Ltd. | Combined mobile device and solid state disk with a shared memory architecture |
FR2934447A1 (en) | 2008-07-23 | 2010-01-29 | France Telecom | Method of communicating between a plurality of nodes, the nodes being organized in a ring |
JP5153539B2 (en) | 2008-09-22 | 2013-02-27 | 株式会社日立製作所 | Memory management method and computer using the method |
US8277645B2 (en) | 2008-12-17 | 2012-10-02 | Jarvis Jr Ernest | Automatic retractable screen system for storm drain inlets |
US8572036B2 (en) | 2008-12-18 | 2013-10-29 | Datalight, Incorporated | Method and apparatus for fault-tolerant memory management |
US8140555B2 (en) | 2009-04-30 | 2012-03-20 | International Business Machines Corporation | Apparatus, system, and method for dynamically defining inductive relationships between objects in a content management system |
US8918623B2 (en) | 2009-08-04 | 2014-12-23 | International Business Machines Corporation | Implementing instruction set architectures with non-contiguous register file specifiers |
US9122579B2 (en) | 2010-01-06 | 2015-09-01 | Intelligent Intellectual Property Holdings 2 Llc | Apparatus, system, and method for a storage layer |
US8799914B1 (en) | 2009-09-21 | 2014-08-05 | Tilera Corporation | Managing shared resource in an operating system by distributing reference to object and setting protection levels |
US20110103391A1 (en) | 2009-10-30 | 2011-05-05 | Smooth-Stone, Inc. C/O Barry Evans | System and method for high-performance, low-power data center interconnect fabric |
US9648102B1 (en) | 2012-12-27 | 2017-05-09 | Iii Holdings 2, Llc | Memcached server functionality in a cluster of data processing nodes |
US8751533B1 (en) | 2009-11-25 | 2014-06-10 | Netapp, Inc. | Method and system for transparently migrating storage objects between nodes in a clustered storage system |
US8832154B1 (en) | 2009-12-08 | 2014-09-09 | Netapp, Inc. | Object location service for network-based content repository |
US8484259B1 (en) | 2009-12-08 | 2013-07-09 | Netapp, Inc. | Metadata subsystem for a distributed object store in a network storage system |
US8949529B2 (en) | 2009-12-30 | 2015-02-03 | International Business Machines Corporation | Customizing function behavior based on cache and scheduling parameters of a memory argument |
US8346934B2 (en) | 2010-01-05 | 2013-01-01 | Hitachi, Ltd. | Method for executing migration between virtual servers and server system used for the same |
US8244978B2 (en) | 2010-02-17 | 2012-08-14 | Advanced Micro Devices, Inc. | IOMMU architected TLB support |
US8402547B2 (en) | 2010-03-14 | 2013-03-19 | Virtual Forge GmbH | Apparatus and method for detecting, prioritizing and fixing security defects and compliance violations in SAP® ABAP™ code |
US9047351B2 (en) | 2010-04-12 | 2015-06-02 | Sandisk Enterprise Ip Llc | Cluster of processing nodes with distributed global flash memory using commodity server technology |
US8589650B2 (en) * | 2010-05-17 | 2013-11-19 | Texas Instruments Incorporated | Dynamically configurable memory system |
US8321487B1 (en) | 2010-06-30 | 2012-11-27 | Emc Corporation | Recovery of directory information |
US9165015B2 (en) | 2010-07-29 | 2015-10-20 | International Business Machines Corporation | Scalable and user friendly file virtualization for hierarchical storage |
US8392368B1 (en) * | 2010-08-27 | 2013-03-05 | Disney Enterprises, Inc. | System and method for distributing and accessing files in a distributed storage system |
US20120102453A1 (en) | 2010-10-21 | 2012-04-26 | Microsoft Corporation | Multi-dimensional objects |
US8650165B2 (en) * | 2010-11-03 | 2014-02-11 | Netapp, Inc. | System and method for managing data policies on application objects |
US8904120B1 (en) | 2010-12-15 | 2014-12-02 | Netapp Inc. | Segmented fingerprint datastore and scaling a fingerprint datastore in de-duplication environments |
US8898119B2 (en) | 2010-12-15 | 2014-11-25 | Netapp, Inc. | Fingerprints datastore and stale fingerprint removal in de-duplication environments |
US9317637B2 (en) | 2011-01-14 | 2016-04-19 | International Business Machines Corporation | Distributed hardware device simulation |
US20130060556A1 (en) | 2011-07-18 | 2013-03-07 | Et International, Inc. | Systems and methods of runtime system function acceleration for cmp design |
US8812450B1 (en) | 2011-04-29 | 2014-08-19 | Netapp, Inc. | Systems and methods for instantaneous cloning |
US20120331243A1 (en) | 2011-06-24 | 2012-12-27 | International Business Machines Corporation | Remote Direct Memory Access ('RDMA') In A Parallel Computer |
US9417823B2 (en) | 2011-07-12 | 2016-08-16 | Violin Memory Inc. | Memory system management |
US8738868B2 (en) | 2011-08-23 | 2014-05-27 | Vmware, Inc. | Cooperative memory resource management for virtualized computing devices |
US8615745B2 (en) | 2011-10-03 | 2013-12-24 | International Business Machines Corporation | Compiling code for an enhanced application binary interface (ABI) with decode time instruction optimization |
US10078593B2 (en) | 2011-10-28 | 2018-09-18 | The Regents Of The University Of California | Multiple-core computer processor for reverse time migration |
US9063939B2 (en) | 2011-11-03 | 2015-06-23 | Zettaset, Inc. | Distributed storage medium management for heterogeneous storage media in high availability clusters |
CN103959253B (en) | 2011-12-01 | 2018-07-17 | 英特尔公司 | Hardware-based memory migration and re-synchronization method and system |
US20140081924A1 (en) | 2012-02-09 | 2014-03-20 | Netapp, Inc. | Identification of data objects stored on clustered logical data containers |
US8706847B2 (en) | 2012-02-09 | 2014-04-22 | International Business Machines Corporation | Initiating a collective operation in a parallel computer |
US8844032B2 (en) | 2012-03-02 | 2014-09-23 | Sri International | Method and system for application-based policy monitoring and enforcement on a mobile device |
US9043567B1 (en) | 2012-03-28 | 2015-05-26 | Netapp, Inc. | Methods and systems for replicating an expandable storage volume |
US9069710B1 (en) | 2012-03-28 | 2015-06-30 | Netapp, Inc. | Methods and systems for replicating an expandable storage volume |
US20130318268A1 (en) * | 2012-05-22 | 2013-11-28 | Xockets IP, LLC | Offloading of computation for rack level servers and corresponding methods and systems |
US9449068B2 (en) | 2012-06-13 | 2016-09-20 | Oracle International Corporation | Information retrieval and navigation using a semantic layer and dynamic objects |
US9280788B2 (en) | 2012-06-13 | 2016-03-08 | Oracle International Corporation | Information retrieval and navigation using a semantic layer |
US9134981B2 (en) | 2012-06-22 | 2015-09-15 | Altera Corporation | OpenCL compilation |
US9378071B2 (en) | 2012-06-23 | 2016-06-28 | Pmda Services Pty Ltd | Computing device for state transitions of recursive state machines and a computer-implemented method for the definition, design and deployment of domain recursive state machines for computing devices of that type |
US9111081B2 (en) | 2012-06-26 | 2015-08-18 | International Business Machines Corporation | Remote direct memory access authentication of a device |
US9390055B2 (en) | 2012-07-17 | 2016-07-12 | Coho Data, Inc. | Systems, methods and devices for integrating end-host and network resources in distributed memory |
KR101492603B1 (en) | 2012-07-25 | 2015-02-12 | 모글루(주) | System for creating interactive electronic documents and control method thereof |
WO2014021821A1 (en) | 2012-07-30 | 2014-02-06 | Empire Technology Development Llc | Writing data to solid state drives |
US8768977B2 (en) | 2012-07-31 | 2014-07-01 | Hewlett-Packard Development Company, L.P. | Data management using writeable snapshots in multi-versioned distributed B-trees |
US20150063349A1 (en) | 2012-08-30 | 2015-03-05 | Shahab Ardalan | Programmable switching engine with storage, analytic and processing capabilities |
US8938559B2 (en) | 2012-10-05 | 2015-01-20 | National Instruments Corporation | Isochronous data transfer between memory-mapped domains of a memory-mapped fabric |
US20140137019A1 (en) | 2012-11-14 | 2014-05-15 | Apple Inc. | Object connection |
US9325711B2 (en) | 2012-12-11 | 2016-04-26 | Servmax, Inc. | Apparatus and data processing systems for accessing an object |
US9037898B2 (en) | 2012-12-18 | 2015-05-19 | International Business Machines Corporation | Communication channel failover in a high performance computing (HPC) network |
CN103095687B (en) | 2012-12-19 | 2015-08-26 | 华为技术有限公司 | Metadata processing method and device |
CN104937567B (en) | 2013-01-31 | 2019-05-03 | 慧与发展有限责任合伙企业 | Mapping mechanism for large shared address spaces |
US9552288B2 (en) | 2013-02-08 | 2017-01-24 | Seagate Technology Llc | Multi-tiered memory with different metadata levels |
US9405688B2 (en) | 2013-03-05 | 2016-08-02 | Intel Corporation | Method, apparatus, system for handling address conflicts in a distributed memory fabric architecture |
WO2014165283A1 (en) | 2013-03-12 | 2014-10-09 | Vulcan Technologies Llc | Methods and systems for aggregating and presenting large data sets |
EP2972885B1 (en) | 2013-03-14 | 2020-04-22 | Intel Corporation | Memory object reference count management with improved scalability |
US9756128B2 (en) | 2013-04-17 | 2017-09-05 | Apeiron Data Systems | Switched direct attached shared storage architecture |
US9195542B2 (en) | 2013-04-29 | 2015-11-24 | Amazon Technologies, Inc. | Selectively persisting application program data from system memory to non-volatile data storage |
US9075557B2 (en) | 2013-05-15 | 2015-07-07 | SanDisk Technologies, Inc. | Virtual channel for data transfers between devices |
US9304896B2 (en) | 2013-08-05 | 2016-04-05 | Iii Holdings 2, Llc | Remote memory ring buffers in a cluster of data processing nodes |
US9825857B2 (en) | 2013-11-05 | 2017-11-21 | Cisco Technology, Inc. | Method for increasing Layer-3 longest prefix match scale |
US9141676B2 (en) | 2013-12-02 | 2015-09-22 | Rakuten Usa, Inc. | Systems and methods of modeling object networks |
US9372752B2 (en) | 2013-12-27 | 2016-06-21 | Intel Corporation | Assisted coherent shared memory |
US10592475B1 (en) | 2013-12-27 | 2020-03-17 | Amazon Technologies, Inc. | Consistent data storage in distributed computing systems |
US9547657B2 (en) | 2014-02-18 | 2017-01-17 | Black Duck Software, Inc. | Methods and systems for efficient comparison of file sets |
US9734063B2 (en) | 2014-02-27 | 2017-08-15 | École Polytechnique Fédérale De Lausanne (Epfl) | Scale-out non-uniform memory access |
US9524302B2 (en) | 2014-03-05 | 2016-12-20 | Scality, S.A. | Distributed consistent database implementation within an object store |
US8868825B1 (en) | 2014-07-02 | 2014-10-21 | Pure Storage, Inc. | Nonrepeating identifiers in an address space of a non-volatile solid-state storage |
KR200475383Y1 (en) | 2014-07-02 | 2014-11-27 | 주식회사 새온누리그린테크 | Grating with function for blocking of garbage and odor |
US9805079B2 (en) | 2014-08-22 | 2017-10-31 | Xcalar, Inc. | Executing constant time relational queries against structured and semi-structured data |
US9513941B2 (en) | 2014-09-17 | 2016-12-06 | International Business Machines Corporation | Codeless generation of APIs |
US9703768B1 (en) | 2014-09-30 | 2017-07-11 | EMC IP Holding Company LLC | Object metadata query |
US9858140B2 (en) | 2014-11-03 | 2018-01-02 | Intel Corporation | Memory corruption detection |
US10049112B2 (en) | 2014-11-10 | 2018-08-14 | Business Objects Software Ltd. | System and method for monitoring of database data |
US9710421B2 (en) | 2014-12-12 | 2017-07-18 | Intel Corporation | Peripheral component interconnect express (PCIe) card having multiple PCIe connectors |
US10025669B2 (en) | 2014-12-23 | 2018-07-17 | Nuvoton Technology Corporation | Maintaining data-set coherency in non-volatile memory across power interruptions |
WO2016118624A1 (en) | 2015-01-20 | 2016-07-28 | Ultrata Llc | Object memory instruction set |
US20160210077A1 (en) | 2015-01-20 | 2016-07-21 | Ultrata Llc | Trans-cloud object based memory |
US11755201B2 (en) | 2015-01-20 | 2023-09-12 | Ultrata, Llc | Implementation of an object memory centric cloud |
US9747108B2 (en) | 2015-03-27 | 2017-08-29 | Intel Corporation | User-level fork and join processors, methods, systems, and instructions |
US9880769B2 (en) | 2015-06-05 | 2018-01-30 | Microsoft Technology Licensing, Llc. | Streaming joins in constrained memory environments |
US9886210B2 (en) | 2015-06-09 | 2018-02-06 | Ultrata, Llc | Infinite memory fabric hardware implementation with router |
US10698628B2 (en) | 2015-06-09 | 2020-06-30 | Ultrata, Llc | Infinite memory fabric hardware implementation with memory |
US9971542B2 (en) | 2015-06-09 | 2018-05-15 | Ultrata, Llc | Infinite memory fabric streams and APIs |
US9881040B2 (en) | 2015-08-20 | 2018-01-30 | Vmware, Inc. | Tracking data of virtual disk snapshots using tree data structures |
US10248337B2 (en) | 2015-12-08 | 2019-04-02 | Ultrata, Llc | Object memory interfaces across shared links |
WO2017100281A1 (en) * | 2015-12-08 | 2017-06-15 | Ultrata, Llc | Memory fabric software implementation |
US10241676B2 (en) * | 2015-12-08 | 2019-03-26 | Ultrata, Llc | Memory fabric software implementation |
WO2017100292A1 (en) | 2015-12-08 | 2017-06-15 | Ultrata, Llc. | Object memory interfaces across shared links |
- 2016
  - 2016-12-07 WO PCT/US2016/065320 patent/WO2017100281A1/en active Application Filing
  - 2016-12-07 EP EP16873738.5A patent/EP3387547B1/en active Active
  - 2016-12-07 CA CA3006773A patent/CA3006773A1/en active Pending
  - 2016-12-07 CN CN201680080706.0A patent/CN108885604B/en active Active
- 2019
  - 2019-02-07 US US16/269,833 patent/US11269514B2/en active Active
- 2022
  - 2022-03-04 US US17/687,148 patent/US11899931B2/en active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6115790A (en) * | 1997-08-29 | 2000-09-05 | Silicon Graphics, Inc. | System, method and computer program product for organizing page caches |
CN101496005A (en) * | 2005-12-29 | 2009-07-29 | 亚马逊科技公司 | Distributed replica storage system with web services interface |
US20070234290A1 (en) * | 2006-03-31 | 2007-10-04 | Benzi Ronen | Interactive container of development components and solutions |
US20070245111A1 (en) * | 2006-04-18 | 2007-10-18 | International Business Machines Corporation | Methods, systems, and computer program products for managing temporary storage |
US20080189251A1 (en) * | 2006-08-25 | 2008-08-07 | Jeremy Branscome | Processing elements of a hardware accelerated reconfigurable processor for accelerating database operations and queries |
US20130031364A1 (en) * | 2011-07-19 | 2013-01-31 | Gerrity Daniel A | Fine-grained security in federated data sets |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109491190A (en) * | 2018-12-10 | 2019-03-19 | 海南大学 | Actively moving aerial screen and optical projection system |
CN109725856A (en) * | 2018-12-29 | 2019-05-07 | 深圳市网心科技有限公司 | Shared node management method and apparatus, electronic device, and storage medium |
CN113807531A (en) * | 2020-06-12 | 2021-12-17 | 百度(美国)有限责任公司 | AI model transfer method using address randomization |
CN113807531B (en) * | 2020-06-12 | 2023-08-22 | 百度(美国)有限责任公司 | AI model transfer method using address randomization |
CN115994145A (en) * | 2023-02-09 | 2023-04-21 | 中国证券登记结算有限责任公司 | Method and device for processing data |
CN115994145B (en) * | 2023-02-09 | 2023-08-22 | 中国证券登记结算有限责任公司 | Method and device for processing data |
Also Published As
Publication number | Publication date |
---|---|
US20220350486A1 (en) | 2022-11-03 |
CN108885604B (en) | 2022-04-12 |
EP3387547A4 (en) | 2019-08-07 |
EP3387547A1 (en) | 2018-10-17 |
EP3387547B1 (en) | 2023-07-05 |
WO2017100281A1 (en) | 2017-06-15 |
US11899931B2 (en) | 2024-02-13 |
CA3006773A1 (en) | 2017-06-15 |
US11269514B2 (en) | 2022-03-08 |
US20190171361A1 (en) | 2019-06-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108885607A (en) | Memory fabric operations and coherency using fault tolerant objects | |
CN108885604A (en) | Memory fabric software implementation | |
CN107924371A (en) | Infinite memory fabric hardware implementation with router | |
US11573699B2 (en) | Distributed index for fault tolerant object memory fabric | |
CN108431774A (en) | Infinite memory fabric streams and APIs | |
CN107533457B (en) | Object memory dataflow instruction execution | |
WO2016200655A1 (en) | Infinite memory fabric hardware implementation with memory | |
CN107533517A (en) | Object based memory fabric |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||