CN104158875A - Method and system for sharing and reducing tasks of data center server - Google Patents

Method and system for sharing and reducing tasks of data center server

Info

Publication number
CN104158875A
CN104158875A CN201410394960.0A CN201410394960A
Authority
CN
China
Prior art keywords
data
processor
center server
data center
storage networking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410394960.0A
Other languages
Chinese (zh)
Other versions
CN104158875B (en)
Inventor
景蔚亮
陈邦明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
HUNAN QINHAI DIGITAL CO Ltd
Original Assignee
Shanghai Xinchu Integrated Circuit Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Xinchu Integrated Circuit Co Ltd filed Critical Shanghai Xinchu Integrated Circuit Co Ltd
Priority to CN201410394960.0A priority Critical patent/CN104158875B/en
Publication of CN104158875A publication Critical patent/CN104158875A/en
Application granted granted Critical
Publication of CN104158875B publication Critical patent/CN104158875B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present invention provides a method and a system for sharing and alleviating the tasks of a data center server. According to the present invention, a processor in the storage network takes over some of the simpler tasks of the processor in the server, so that the server's processor can execute the remaining, more complicated tasks. This substantially reduces power consumption, saves cost, and improves system performance.

Description

Method and system for sharing and alleviating data center server tasks
Technical field
The present invention relates to a method for sharing and alleviating data center server tasks, and in particular to a method and system in which microprocessors in the storage network share and alleviate the tasks of a data center server.
Background art
With the arrival of the cloud era, the term "big data" is mentioned more and more often. People use it to describe and define the mass of data produced in the age of the information explosion, and to name the related technical developments and innovations. Data is expanding rapidly and is shaping the future development of enterprises. Although many enterprises may not yet recognize the hidden problems brought by this explosive growth of data, over time people will increasingly appreciate the importance of data to the enterprise. As a February 2012 column in The New York Times put it, the "big data" era has arrived: in business, economics and other fields, decisions will increasingly be made on the basis of data and analysis rather than on experience and intuition. Big data analysis is often linked with cloud computing, because real-time analysis of large data sets requires frameworks such as MapReduce to distribute the work across tens, hundreds or even thousands of computers. By the end of 2012, data volumes had risen from the TB level (1024 GB = 1 TB) to the PB (1024 TB = 1 PB), EB (1024 PB = 1 EB) and even ZB (1024 EB = 1 ZB) level. Research by International Data Corporation (IDC) shows that the data volume produced worldwide was 0.49 ZB in 2008, 0.8 ZB in 2009 and 1.2 ZB in 2010, and reached as much as 1.82 ZB in 2011, equivalent to more than 200 GB of data produced per person worldwide. By 2012, all printed material produced by humans amounted to about 200 PB of data, and everything ever said by all of humanity in history amounted to roughly 5 EB.
What research institutions currently care about is how to query or search, from such an enormous body of data, the valuable data that the user is interested in. The structure of a traditional user query is shown in Fig. 1. A user sends a data request over the network through a personal computer; the data center server receives the request and begins to query and search the storage network for the data information the user needs. A processor can only operate directly on data in main memory, so for such a huge volume of data, the data must first be loaded from the storage devices in the storage network into the server's memory; the server's processor then processes and operates on this information and returns the result to the client. Obviously, the amount of data imported from the storage network into the server is far larger than the amount of data the server returns to the client. For an ever-growing data system, the rate at which the processor can process data is limited by how fast data can be imported from the storage network into server memory, because whatever the storage medium — a traditional disk, a solid state drive, flash memory, network attached storage (NAS, a dedicated data storage server), direct attached storage (DAS, a storage architecture in which external storage is connected directly to the server by cables) or a redundant array of independent disks (RAID, in which multiple independent hard disks are combined in different ways into a disk group whose performance greatly exceeds that of a single disk) — its data read/write speed is far lower than the read/write speed of main memory. At present, in order to increase the speed at which the server processor handles data, in-memory computation (IMC) can be used: by increasing the memory capacity, more data can be imported at one time, which accelerates processing. This approach does speed up data processing, but a server has an upper limit on the memory it can be configured with; once that limit is reached, the only option is to add more servers, which is clearly costly, and because memory is volatile and needs periodic refreshing, the power consumption is also very high.
In such a huge database, the information the user is really interested in is often just the tip of the iceberg. The processor spends most of its time searching for and querying the data that users actually need, and these operations do not require the participation of the server processor's arithmetic logic unit (ALU). One could say that the server processor spends most of its time on trivial work, and performance is wasted. At the same time, there are also a large number of microprocessors in the storage network: whether in disks, solid state drives, flash memory, NAS or RAID devices, there is a microprocessor inside whose task is to manage and control the storage cells — for example wear leveling, module selection, error checking and correction, and data read/write. Some of the microprocessors present in the storage network are not even inferior in performance to the processors of some personal computers. Compared with the server's processor, they are built on more advanced process nodes, so their power consumption and cost are lower, and most of the time, when the storage does not need to perform a large number of write operations, these microprocessors are idle.
In summary, the microprocessors in the storage network are not fully utilized, which leads to wasted performance.
Summary of the invention
In view of the above problems, the present invention provides a method for sharing and alleviating data center server tasks, which comprises the following steps:
Step S1: a user sends a data request instruction through an electronic terminal device to the data center server in the network;
Step S2: after receiving the data request instruction, the data center server obtains the data information required by the user from a storage network, and feeds the required data information back to the electronic terminal device;
wherein the data center server is configured with a first processor to perform data operation processing tasks, and the storage network is configured with a second processor to perform maintenance processing on the storage network;
when the second processor is idle, the data center server allocates part of the data operation processing tasks to the second processor for processing, and the second processor can feed the data requested by the user back to the electronic terminal device directly.
In the above method, the maintenance operation tasks performed by the second processor comprise: wear leveling, module selection, error checking and correction, and/or data read/write operations.
In the above method, the data processing capability of the first processor is superior to that of the second processor.
In the above method, the second processor is a microprocessor configured in a storage device in the storage network.
In the above method, the storage devices in the storage network are one or more of disks, solid state drives and flash memory, and/or one or more of the NAS, DAS and RAID in the data center server.
The present invention also provides a system for sharing and alleviating data center server tasks, wherein the system comprises:
an electronic terminal device, a data center server and a storage network, wherein a user sends a data request instruction through the electronic terminal device to the data center server in the network; after receiving the data request instruction, the data center server obtains the data information required by the user from the storage network, and feeds the required data information back to the electronic terminal device;
wherein the data center server is configured with a first processor to perform data operation processing tasks, and a second processor is configured in the storage network to perform maintenance processing on the storage network;
when the second processor is idle, the data center server allocates part of the data operation processing tasks to the second processor for processing, and the second processor can feed the data requested by the user back to the user terminal directly.
In the above system, the maintenance operation tasks performed by the second processor for the data center server comprise: wear leveling, module selection, error checking and correction, and/or data read/write operations.
In the above system, the data processing capability of the first processor is superior to that of the second processor.
In the above system, the second processor is a microprocessor configured in a storage device in the storage network.
In the above system, the storage devices in the storage network are one or more of disks, solid state drives and flash memory, and/or one or more of the NAS, DAS and RAID in the data center server.
Because the present invention adopts the above technical solution, the microprocessors in the storage network are used to process tasks that do not require heavy participation of the ALU of the data center server, such as data transfer and data query, thereby alleviating the load on the server and improving the performance of the system.
Brief description of the drawings
The present invention and its features, aspects and advantages will become more apparent from the following detailed description of non-limiting embodiments, read with reference to the accompanying drawings. Throughout the drawings, identical reference signs indicate identical parts. The drawings are deliberately not drawn to scale; the emphasis is on illustrating the gist of the present invention.
Fig. 1 is a structural diagram of a traditional user searching for data information;
Fig. 2 is a schematic diagram of a method of alleviating data center server tasks according to the present invention;
Fig. 3 is a schematic diagram of the server processor executing task A while task B occupies the memory;
Fig. 4 is a schematic diagram of a solution using in-memory computation (IMC);
Fig. 5 is a schematic diagram of the method of alleviating server tasks according to the present invention;
Fig. 6 is a schematic diagram of the server processor executing task B2 while task B1 occupies the memory;
Fig. 7 is a schematic diagram of a solution using in-memory computation (IMC);
Fig. 8 is schematic diagram 1 of the method of alleviating server tasks according to the present invention;
Fig. 9 is schematic diagram 2 of the method of alleviating server tasks according to the present invention;
Fig. 10 is a structural diagram of the disk controller in a hard disk;
Fig. 11 is schematic diagram A of a concrete application of the present invention;
Fig. 12 is schematic diagram B of a concrete application of the present invention;
Fig. 13 is schematic diagram C of a concrete application of the present invention.
Detailed description of the embodiments
In the following description, numerous specific details are given in order to provide a more thorough understanding of the present invention. However, it will be apparent to those skilled in the art that the present invention can be practiced without one or more of these details. In other instances, certain technical features well known in the art are not described, in order to avoid obscuring the present invention.
It should be understood that the present invention can be implemented in different forms and should not be construed as being limited to the embodiments set forth here. On the contrary, these embodiments are provided so that the disclosure is thorough and complete and fully conveys the scope of the present invention to those skilled in the art. In the drawings, the sizes and relative sizes of layers and regions may be exaggerated for clarity. The same reference numerals denote the same elements throughout.
In order that the present invention may be thoroughly understood, detailed steps and detailed structures are set forth in the following description to explain the technical solution of the present invention. Preferred embodiments of the present invention are described in detail below; however, apart from these detailed descriptions, the present invention may also have other embodiments.
The present invention provides a method for sharing and alleviating data center server tasks, comprising the following steps:
Step S1: a user sends a data request instruction through an electronic terminal device to the data center server in the network. In an optional embodiment, the electronic terminal device can be a PC or a mobile phone.
Step S2: after receiving the data request instruction, the data center server obtains the data information required by the user from the storage network, and feeds the required data information back to the electronic terminal device.
In the present invention, the data center server is configured with a first processor to perform data operation processing tasks, and the storage network is configured with a second processor to perform maintenance processing on the storage network. When the second processor is idle, the data center server allocates part of the data operation processing tasks to the second processor for processing, and the second processor can feed the data requested by the user back to the electronic terminal device directly. The second processor thereby relieves the pressure on the first processor and increases the processing speed of the whole data center.
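The dispatch rule just described can be illustrated with a short sketch. The following Python fragment is only an illustration under assumed names (Request, ServerProcessor, StorageProcessor and handle_request are all hypothetical and do not appear in the patent): when the storage-side second processor is idle, part of the work is handed to it and it answers the terminal directly; otherwise the first processor in the server handles the request.

```python
# Minimal sketch (not from the patent text) of the dispatch rule described above.
# All class and function names here are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Request:
    user_id: str
    query: str

class ServerProcessor:                      # "first processor" in the server
    def process(self, req: Request) -> str:
        return f"server result for {req.query}"

class StorageProcessor:                     # "second processor" in the storage network
    def __init__(self):
        self.busy = False                   # busy with maintenance (wear leveling, ECC, ...)

    def is_idle(self) -> bool:
        return not self.busy

    def process_and_reply(self, req: Request) -> str:
        # the storage-side processor answers the client directly,
        # without staging bulk data in server memory
        return f"storage-side result for {req.query}"

def handle_request(req: Request, first: ServerProcessor, second: StorageProcessor) -> str:
    if second.is_idle():
        return second.process_and_reply(req)    # offload simple, data-heavy work
    return first.process(req)                   # fall back to the server processor

if __name__ == "__main__":
    print(handle_request(Request("u1", "lookup"), ServerProcessor(), StorageProcessor()))
```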
Preferably, the data processing capability of the first processor is superior to that of the second processor. Further preferably, the second processor is a microprocessor configured in a storage device in the storage network, where the storage devices in the storage network include, for example, one or more of disks, solid state drives and flash memory, as well as storage devices such as the NAS, DAS and RAID in some server storage networks.
Preferably, the maintenance operation tasks performed by the second processor include: wear leveling, module selection, error checking and correction, and/or data read/write operations.
At the same time, the present invention also provides a system for sharing and alleviating data center server tasks. The system comprises an electronic terminal device, a data center server and a storage network. A user sends a data request instruction through the electronic terminal device to the data center server in the network; after receiving the data request instruction, the data center server obtains the data information required by the user from the storage network, and feeds the required data information back to the electronic terminal device. The data center server is configured with a first processor to perform data operation processing tasks, and a second processor is configured in the storage network to perform maintenance processing on the storage network. When the second processor is idle, the data center server allocates part of the data operation processing tasks to the second processor for processing, and the second processor can feed the data requested by the user back to the user terminal directly.
Preferably, the maintenance operation tasks performed by the second processor include: wear leveling, module selection, error checking and correction, and/or data read/write operations.
Preferably, the data processing capability of the first processor is superior to that of the second processor. Further preferably, the second processor is a microprocessor configured in a storage device in the storage network, where the storage devices in the storage network include, for example, disks, solid state drives and flash memory, as well as storage devices such as the NAS, DAS and RAID in some server storage networks.
An embodiment is given below to further elaborate the present invention. The present invention proposes a method of sharing the tasks of a data center server. In the traditional approach, the tasks originally handled by the server are divided into two parts: one part is task A and the other is task B. The tasks completed by the microprocessors in the storage network are collectively referred to as task C. After the method of sharing data center server tasks according to the present invention is applied, the processor in the server completes only task A, while the microprocessors in the storage network take over server task B, as shown in Fig. 2, thereby alleviating the load on the server processor and improving system performance.
Here, task A consists of tasks that involve complex scientific computation and require heavy participation of the ALU, but do not need a large number of burst data requests; task B consists of the tasks handled by the server processor other than task A, i.e. simple, repetitive, non-scientific processing that often requires a large number of burst data requests; task C consists of the traditional maintenance tasks completed by the microprocessors in the storage network, such as wear leveling, module selection, error checking and correction, and data read/write.
The present invention allows the server processor to concentrate on task A, using its high-performance ALU to process it; at the same time, when the microprocessors in the storage network are not busy with task C, they can process task B in parallel. This reduces the read-speed bottleneck between the storage network and the server memory, reduces the power consumption of the large-capacity memory that task B would otherwise require, and reduces the I/O occupancy between that memory and the server, thereby improving server system performance.
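As a rough illustration of this split, the following Python sketch (hypothetical function names, not taken from the patent) runs an ALU-heavy stand-in for task A and a data-scanning stand-in for task B concurrently, mimicking the server processor and a storage-side microprocessor working in parallel so that task B's bulk data never has to be staged in server memory.

```python
# Illustrative sketch of the task A / task B split described above.
# The two "tasks" are simple stand-ins; the names are hypothetical.

from concurrent.futures import ThreadPoolExecutor

def task_a_on_server(numbers):
    # stands in for complex, ALU-heavy computation on the server's first processor
    return sum(x * x for x in numbers)

def task_b_on_storage(records, keyword):
    # stands in for a simple, repetitive scan executed by a storage-side processor
    return [r for r in records if keyword in r]

if __name__ == "__main__":
    records = ["alpha log", "beta log", "alpha error", "gamma ok"]
    with ThreadPoolExecutor(max_workers=2) as pool:
        fut_a = pool.submit(task_a_on_server, range(1_000))
        fut_b = pool.submit(task_b_on_storage, records, "alpha")
        print("task A result:", fut_a.result())
        print("task B result:", fut_b.result())
```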
For example, suppose that in a certain server the mass of data required by task B has occupied the memory space. If the processor now needs to process task A, the mass of data required by task A must be imported from the storage network into the server's memory; since the memory is occupied by task B's data, task B's data will be overwritten by task A's data, as illustrated in Fig. 3. If task B's data is needed frequently, it will have to be imported from the storage network into memory again and again, and the power consumption rises accordingly; moreover, reading data from the storage network limits the processing speed of the processor, so performance also suffers. If in-memory computation (IMC) is adopted, extra servers are needed to meet the larger memory requirement, as shown in Fig. 4. This approach obviously increases system performance, but the cost increases greatly and so does the power consumption. If the method of sharing and alleviating data center server tasks according to the present invention is adopted, there is no need to add an expensive, power-hungry server: the data required by task A is imported into server memory and processed by the server's high-performance processor, while the microprocessors in the storage network process task B. The two run in parallel, which is more efficient and delivers higher performance, and the power consumption of importing task B's data into server memory is saved. The result is shown in Fig. 5.
As another example, suppose that in a certain server the mass of data required by task B1 has occupied the memory space. If the processor now needs to process task B2, the mass of data required by task B2 must be imported from the storage network into the server's memory; since the memory is occupied by task B1's data, task B1's data will be overwritten by task B2's data, as illustrated in Fig. 6. If task B1's data is needed frequently, it will have to be imported from the storage network into memory again and again, and the power consumption rises accordingly; moreover, reading data from the storage network limits the processing speed of the processor, so performance also suffers. If in-memory computation (IMC) is adopted, extra servers are needed to meet the larger memory requirement, as shown in Fig. 7. This approach obviously increases system performance, but the cost increases greatly and so does the power consumption. If the method of sharing and alleviating data center server tasks according to the present invention is adopted, there is no need to add an expensive, power-hungry server: the server's processor continues to process task B1 while a microprocessor in the storage network processes task B2, as shown in Fig. 8. If the server processor is to be relieved further, the mass data of tasks B1 and B2 need not be imported into server memory at all: multiple microprocessors in the storage network execute tasks B1 and B2 in parallel, and the server's processor is free to carry out other tasks, as shown in Fig. 9. Obviously, this method of sharing and alleviating server tasks has lower power consumption and lower cost, with no loss of performance, as contrasted with in-memory computation in Table 1.
Table 1
A concrete application is described below for further illustration.
Take a hard disk as an example. Nowadays most hard disks contain a hard disk controller, whose main structure is shown in Fig. 10. As can be seen from the figure, this controller has a processor of its own as well as its own code area and data area (buffer DRAM). As storage capacity grows and storage structures become more complex, the performance requirements on storage controllers also rise: in storage controllers such as RAID and NAS, the internal processor is a Pentium-class processor or a high-performance ARM core, equipped with 2 GB of memory or even more. These processors can not only complete their traditional tasks — wear leveling, module selection, error checking and correction, controlling data reads and writes, and so on — but are also fully capable of performing relatively complex operations such as data query and search, essentially without the participation of the high-performance processor in the data center server. This not only saves the time of importing data from the storage network into server memory, but also greatly reduces the power consumption of data import, frees the server processor, and improves performance.
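A minimal sketch of this idea, assuming a hypothetical controller-side API (DiskControllerProcessor and its query method are illustrative only, not part of the patent), is given below: the query runs on the processor inside the storage device and only the matching records are returned, instead of shipping the whole data set into server memory for the server CPU to filter.

```python
# Hedged sketch of an in-storage query: the controller's own processor does the scan
# and only the hits cross the link to the server. Names are hypothetical.

from typing import Iterable, Iterator

class DiskControllerProcessor:
    """Stand-in for the microprocessor inside a hard disk / RAID / NAS controller."""

    def __init__(self, stored_records: Iterable[str]):
        self._records = list(stored_records)     # data held by the storage device

    def query(self, keyword: str) -> Iterator[str]:
        # the scan runs on the storage side; only matching records are returned
        return (rec for rec in self._records if keyword in rec)

if __name__ == "__main__":
    controller = DiskControllerProcessor(
        ["2014-08-12 INFO boot", "2014-08-12 ERROR disk", "2014-08-13 INFO ok"]
    )
    # the server (or even the client) receives only the filtered result set
    print(list(controller.query("ERROR")))
```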
Another embodiment is given below to further elaborate the present invention. Suppose users A, B, C and D are playing an online game, for example a card game such as Dou Dizhu ("Fight the Landlord"), which is not complicated. In the conventional case the server must take part in processing a series of operations: data is imported from the storage network into server memory, processed, and then returned to user A, B, C or D, with the workflow shown in Fig. 11. With the method of sharing and alleviating data center server tasks according to the present invention, when users A, B, C and D send data requests, the microprocessors in the storage network are fully capable of processing these tasks. The workflow is shown in Fig. 12: a microprocessor in the storage network processes the users' data requests and returns the processed data to the server, and the server transfers the processed data to the users. The difference from the flow in Fig. 11 is that the server processor does not need to process the users' requested data; the server only returns the requested data of authorized users, while the users' data processing is completed by the microprocessors in the storage network. Furthermore, if users A, B, C and D are all authorized users, the server processor does not need to participate at all: data can be transferred directly between the microprocessors in the storage network and the users through the south bridge, as shown in Fig. 13. This further avoids importing data from the storage network into server memory and avoids occupying server processor resources; the server processor can go on to process other complicated tasks, improving system performance.
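The flow of Figs. 12 and 13 can be summarized in the following hedged sketch (all names such as DataCenterServer, StorageMicroprocessor and AUTHORIZED_USERS are hypothetical): the server merely checks authorization, while the game request itself is processed by a storage-side microprocessor and, for authorized users, the reply can bypass the server processor and return over the direct path.

```python
# Rough sketch of the authorization + direct-return flow; not the patent's actual code.

AUTHORIZED_USERS = {"A", "B", "C", "D"}

class StorageMicroprocessor:
    def process_game_request(self, user: str, move: str) -> str:
        # game logic handled entirely on the storage side
        return f"state after {user} plays {move}"

class DataCenterServer:
    def __init__(self, storage_cpu: StorageMicroprocessor):
        self.storage_cpu = storage_cpu

    def authorize(self, user: str) -> bool:
        return user in AUTHORIZED_USERS       # the server's only remaining duty

    def handle(self, user: str, move: str) -> str:
        if not self.authorize(user):
            return "denied"
        # reply produced by the storage-side processor and returned over the direct path
        return self.storage_cpu.process_game_request(user, move)

if __name__ == "__main__":
    server = DataCenterServer(StorageMicroprocessor())
    print(server.handle("A", "pair of kings"))
    print(server.handle("X", "cheat"))
```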
To summarize, the present invention proposes a method of sharing and alleviating data center server tasks: the processors in the storage network are used to take over some of the simpler tasks of the server's processor, such as data transfer and data query, thereby alleviating the server's load so that the server processor can carry out other complicated tasks. This greatly reduces power consumption, saves cost, and improves system performance.
The preferred embodiments of the present invention have been described above. It should be understood that the present invention is not limited to the specific embodiments described above; devices and structures not described in detail should be understood to be implemented in the manner common in the art. Any person of ordinary skill in the art can, without departing from the scope of the technical solution of the present invention, use the methods and technical content disclosed above to make many possible changes and modifications to the technical solution of the present invention, or modify it into equivalent embodiments with equivalent variations, without affecting the essence of the present invention. Therefore, any simple modification, equivalent variation or modification made to the above embodiments in accordance with the technical essence of the present invention, and not departing from the content of the technical solution of the present invention, still falls within the scope of protection of the technical solution of the present invention.

Claims (10)

1. A method for sharing and alleviating data center server tasks, characterized by comprising the following steps:
Step S1: a user sends a data request instruction through an electronic terminal device to the data center server in the network;
Step S2: after receiving the data request instruction, the data center server obtains the data information required by the user from a storage network, and feeds the required data information back to the electronic terminal device;
wherein the data center server is configured with a first processor to perform data operation processing tasks; the storage network is configured with a second processor to perform maintenance processing on the storage network;
when the second processor is idle, the data center server allocates part of the data operation processing tasks to the second processor for processing, and the second processor feeds the data requested by the user back to the electronic terminal device directly.
2. The method of claim 1, characterized in that the maintenance operation tasks performed by the second processor comprise: wear leveling, module selection, error checking and correction, and/or data read/write operations.
3. The method of claim 1, characterized in that the data processing capability of the first processor is superior to that of the second processor.
4. The method of claim 1, characterized in that the second processor is a microprocessor configured in a storage device in the storage network.
5. The method of claim 4, characterized in that the storage device in the storage network is one or more of a disk, a solid state drive and flash memory, and/or one or more of the NAS, DAS and RAID in the data center server.
6. A system for sharing and alleviating data center server tasks, characterized in that the system comprises:
an electronic terminal device, a data center server and a storage network, wherein a user sends a data request instruction through the electronic terminal device to the data center server in the network; after receiving the data request instruction, the data center server obtains the data information required by the user from the storage network, and feeds the required data information back to the electronic terminal device;
wherein the data center server is configured with a first processor to perform data operation processing tasks; a second processor is configured in the storage network, the second processor being used to perform maintenance processing on the storage network;
when the second processor is idle, the data center server allocates part of the data operation processing tasks to the second processor for processing, and the second processor feeds the data requested by the user back to the user terminal directly.
7. The system of claim 6, characterized in that the maintenance operation tasks performed by the second processor for the data center server comprise: wear leveling, module selection, error checking and correction, and/or data read/write operations.
8. The system of claim 6, characterized in that the data processing capability of the first processor is superior to that of the second processor.
9. The system of claim 6, characterized in that the second processor is a microprocessor configured in a storage device in the storage network.
10. The system of claim 9, characterized in that the storage device in the storage network is one or more of a disk, a solid state drive and flash memory, and/or one or more of the NAS, DAS and RAID in the data center server.
CN201410394960.0A 2014-08-12 2014-08-12 Method and system for sharing and alleviating data center server tasks Active CN104158875B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410394960.0A CN104158875B (en) 2014-08-12 2014-08-12 Method and system for sharing and alleviating data center server tasks

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410394960.0A CN104158875B (en) 2014-08-12 2014-08-12 Method and system for sharing and alleviating data center server tasks

Publications (2)

Publication Number Publication Date
CN104158875A true CN104158875A (en) 2014-11-19
CN104158875B CN104158875B (en) 2018-04-20

Family

ID=51884280

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410394960.0A Active CN104158875B (en) 2014-08-12 2014-08-12 Method and system for sharing and alleviating data center server tasks

Country Status (1)

Country Link
CN (1) CN104158875B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104571946A (en) * 2014-11-28 2015-04-29 中国科学院上海微系统与信息技术研究所 Memory device supporting quick query of logical circuit and access method of memory device
CN104991630A (en) * 2015-06-15 2015-10-21 上海新储集成电路有限公司 Method and server structure for reducing processor load in server
CN105022698A (en) * 2015-06-26 2015-11-04 上海新储集成电路有限公司 Method for storing special function data by using last level of hybrid cache
CN105740134A (en) * 2016-01-29 2016-07-06 浪潮(北京)电子信息产业有限公司 File based testing method and apparatus
CN107276912A (en) * 2016-04-07 2017-10-20 华为技术有限公司 Memory, message processing method and distributed memory system
CN110083480A (en) * 2019-04-16 2019-08-02 上海新储集成电路有限公司 A kind of configurable multi-functional data processing unit

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1619478A (en) * 2003-11-21 2005-05-25 株式会社日立制作所 Cluster-type storage system and management method thereof
CN1673975A (en) * 2004-03-25 2005-09-28 株式会社日立制作所 Storage system
CN101038590A (en) * 2007-04-13 2007-09-19 武汉大学 Space data clustered storage system and data searching method
CN101105711A (en) * 2006-07-13 2008-01-16 国际商业机器公司 System and method for distributing processing function between main processor and assistant processor
CN201011563Y (en) * 2006-12-01 2008-01-23 华中科技大学 Disk array assistant processing controlling card
CN102760045A (en) * 2011-04-29 2012-10-31 无锡江南计算技术研究所 Intelligent storage device and data processing method thereof

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1619478A (en) * 2003-11-21 2005-05-25 株式会社日立制作所 Cluster-type storage system and management method thereof
CN1673975A (en) * 2004-03-25 2005-09-28 株式会社日立制作所 Storage system
CN101105711A (en) * 2006-07-13 2008-01-16 国际商业机器公司 System and method for distributing processing function between main processor and assistant processor
CN201011563Y (en) * 2006-12-01 2008-01-23 华中科技大学 Disk array assistant processing controlling card
CN101038590A (en) * 2007-04-13 2007-09-19 武汉大学 Space data clustered storage system and data searching method
CN102760045A (en) * 2011-04-29 2012-10-31 无锡江南计算技术研究所 Intelligent storage device and data processing method thereof

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104571946A (en) * 2014-11-28 2015-04-29 中国科学院上海微系统与信息技术研究所 Memory device supporting quick query of logical circuit and access method of memory device
CN104571946B (en) * 2014-11-28 2017-06-27 中国科学院上海微系统与信息技术研究所 A kind of storage arrangement and its access method for supporting logic circuit quick search
CN104991630A (en) * 2015-06-15 2015-10-21 上海新储集成电路有限公司 Method and server structure for reducing processor load in server
CN104991630B (en) * 2015-06-15 2018-02-27 上海新储集成电路有限公司 The method and server architecture of processor load in a kind of reduction server
CN105022698A (en) * 2015-06-26 2015-11-04 上海新储集成电路有限公司 Method for storing special function data by using last level of hybrid cache
CN105022698B (en) * 2015-06-26 2020-06-19 上海新储集成电路有限公司 Method for storing special function data by using last-level mixed cache
CN105740134A (en) * 2016-01-29 2016-07-06 浪潮(北京)电子信息产业有限公司 File based testing method and apparatus
CN107276912A (en) * 2016-04-07 2017-10-20 华为技术有限公司 Memory, message processing method and distributed memory system
CN107276912B (en) * 2016-04-07 2021-08-27 华为技术有限公司 Memory, message processing method and distributed storage system
CN110083480A (en) * 2019-04-16 2019-08-02 上海新储集成电路有限公司 A kind of configurable multi-functional data processing unit
CN110083480B (en) * 2019-04-16 2023-08-18 上海新储集成电路有限公司 Configurable multifunctional data processing unit

Also Published As

Publication number Publication date
CN104158875B (en) 2018-04-20

Similar Documents

Publication Publication Date Title
US11487760B2 (en) Query plan management associated with a shared pool of configurable computing resources
US11476869B2 (en) Dynamically partitioning workload in a deep neural network module to reduce power consumption
Kambatla et al. Trends in big data analytics
Ibrahim et al. Evaluating mapreduce on virtual machines: The hadoop case
CN104158875A (en) Method and system for sharing and reducing tasks of data center server
Slagter et al. An improved partitioning mechanism for optimizing massive data analysis using MapReduce
Dudin et al. A review of cloud computing
CN105045607A (en) Method for achieving uniform interface of multiple big data calculation frames
US11494237B2 (en) Managing workloads of a deep neural network processor
US10579419B2 (en) Data analysis in storage system
Premchaiswadi et al. Optimizing and tuning MapReduce jobs to improve the large‐scale data analysis process
Jia Google cloud computing platform technology architecture and the impact of its cost
US9760577B2 (en) Write-behind caching in distributed file systems
CN105426119A (en) Storage apparatus and data processing method
HeydariGorji et al. In-storage processing of I/O intensive applications on computational storage drives
Becker et al. Memory-driven computing accelerates genomic data processing
Zhang et al. Artificial intelligence platform for mobile service computing
Segall et al. Overview of big data-intensive storage and its technologies for cloud and fog computing
Jin et al. Optimization of task assignment strategy for map-reduce
Fuzong et al. Dynamic data compression algorithm selection for big data processing on local file system
US20140237149A1 (en) Sending a next request to a resource before a completion interrupt for a previous request
Talan et al. An overview of Hadoop MapReduce, spark, and scalable graph processing architecture
Bhushan et al. Cost based model for big data processing with hadoop architecture
Craddock et al. The Case for Physical Memory Pools: A Vision Paper
Hsieh et al. Energy-saving cloud computing platform based on micro-embedded system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20221101

Address after: 4 / F, building 2, Hunan scientific research achievements transformation center workshop, Longping high tech park, Furong district, Changsha City, Hunan Province 410000

Patentee after: HUNAN QINHAI DIGITAL Co.,Ltd.

Address before: 201500 No. 8, building 2, No. 6505, Tingwei Road, Jinshan District, Shanghai

Patentee before: SHANGHAI XINCHU INTEGRATED CIRCUIT Co.,Ltd.

CP03 Change of name, title or address
CP03 Change of name, title or address

Address after: No. 338, Zhanggongling Road, Longping High tech Park, Furong District, Changsha, Hunan 410000

Patentee after: Hunan Qinhai Digital Co.,Ltd.

Address before: 4 / F, building 2, Hunan scientific research achievements transformation center workshop, Longping high tech park, Furong district, Changsha City, Hunan Province 410000

Patentee before: HUNAN QINHAI DIGITAL Co.,Ltd.