CN104158875B - Method and system for sharing and lightening data center server tasks - Google Patents

Method and system for sharing and lightening data center server tasks

Info

Publication number
CN104158875B
CN104158875B (application CN201410394960.0A)
Authority
CN
China
Prior art keywords
data
processor
task
server
data center
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410394960.0A
Other languages
Chinese (zh)
Other versions
CN104158875A (en)
Inventor
景蔚亮
陈邦明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
HUNAN QINHAI DIGITAL CO Ltd
Original Assignee
Shanghai Xinchu Integrated Circuit Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Xinchu Integrated Circuit Co Ltd
Priority to CN201410394960.0A
Publication of CN104158875A
Application granted
Publication of CN104158875B
Legal status: Active (current)
Anticipated expiration


Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present invention provides a method and system for sharing and lightening the tasks of a data center server. Processors in the storage network take over some of the simpler tasks of the processor inside the server, so that the server's processor can go on to perform other, more complex tasks, thereby substantially reducing power consumption, saving cost, and improving system performance.

Description

Method and system for sharing and lightening data center server tasks
Technical field
The present invention relates to a method for sharing and lightening the tasks of a data center server, and more particularly to a method and system that uses microprocessors in a storage network to share and lighten the tasks of a data center server.
Background technology
With the arrival of the cloud era, the term "big data" is mentioned more and more often. People use it to describe and define the mass of data produced in the age of information explosion, and to name the related technological developments and innovations. Data is expanding and growing rapidly, and it determines the future development of enterprises; although many enterprises may not yet recognize the hidden risks brought by explosive data growth, over time people will increasingly appreciate the importance of data to the enterprise. As a column in The New York Times stated in February 2012, the "big data" era has arrived: in business, economics, and other fields, decisions will increasingly be made on the basis of data and analysis rather than on experience and intuition. Big data analysis is often linked with cloud computing, because real-time analysis of large data sets requires frameworks such as MapReduce to distribute the work to tens, hundreds, or even thousands of computers. By the end of 2012, data volumes had risen from the TB level (1024 GB = 1 TB) to the PB (1024 TB = 1 PB), EB (1024 PB = 1 EB), and even ZB (1024 EB = 1 ZB) levels. Research by the International Data Corporation (IDC) shows that the amount of data produced worldwide was 0.49 ZB in 2008 and 0.8 ZB in 2009, grew to 1.2 ZB in 2010, and reached as much as 1.82 ZB in 2011, equivalent to more than 200 GB of data produced by every person in the world. Moreover, by 2012 the data volume of all printed materials produced by human beings was 200 PB, while all the words ever spoken by humanity in history amount to roughly 5 EB.
What is of interest for current scientific research and applications is how to query or retrieve, from such huge amounts of data, the valuable data that users care about. The structure of a conventional user query is shown in Fig. 1. A user sends a data request into the network through a personal computer; the data center server receives the command and starts to query and search the storage network for the data information required by the user. As is well known, a processor can only directly process data information that resides in memory, so for such a huge data volume the data must first be transferred from the storage network into the memory of the server, after which the processor of the server processes and operates on the information and returns the result to the client. Obviously, the data information imported into the server from the storage network is far larger than the data information returned from the server to the user client. For an increasingly huge data system, the bottleneck of the processor's data processing speed is the rate at which the storage network can import data into the server memory, because whatever the storage medium — a traditional disk, a solid state drive, flash memory, network attached storage (NAS, a dedicated data storage server), direct attached storage (DAS, external storage attached directly to a server through a connecting cable), or a redundant array of independent disks (RAID, which combines multiple independent hard disks into a disk group in different ways, the performance of the disk group being greatly improved over that of a single disk) — its data read/write rate is far lower than the read/write rate of main memory. At present, in order to improve the speed at which the server processor processes data, in-memory computing (IMC, In-Memory Computation) technology can be used: by increasing the memory capacity, more data can be imported at one time, thereby accelerating the processor's data processing. This method can indeed accelerate data processing, but the memory capacity that can be configured in a server has an upper limit; once the capacity reaches that limit, the only way out is to increase the number of servers, which is obviously costly, and since memory is volatile and must be refreshed periodically, the power consumption is also very large.
The information that users are really interested in is often just the tip of the iceberg in such a huge database. The processor spends most of its time searching for and querying the data that users actually need, and these operations do not require the participation of the server processor's ALU (Arithmetic Logic Unit); it can be said that the server processor spends most of its time on trivial work, which is a waste of its capability. It is also well known that there are a large number of microprocessors in the storage network: whether disks, solid state drives, flash memory, or NAS, RAID, and the like, all are equipped with internal microprocessors whose task is to manage and control the storage units, for example wear leveling, module selection, error checking and correction, and data read/write. Some of the microprocessors in the storage network are, in performance, no less capable than the processors of some personal computers; compared with the server processor, they are integrated at more advanced process nodes, so their power consumption is lower and their cost is also lower, and most of the time — that is, when no large number of write operations to the storage needs to be performed — these microprocessors are idle.
In conclusion the microprocessor in storage network is not fully utilized, the waste of performance is in turn resulted in.
Content of the invention
In view of the above problems, the present invention provides a method for sharing and lightening the tasks of a data center server, comprising the following steps:
Step S1: a user sends a data request instruction to a data center server in the network through an electronic terminal device;
Step S2: after receiving the data request instruction, the data center server obtains the data information required by the user from a storage network and feeds the data information required by the user back to the electronic terminal device;
wherein the data center server is configured with a first processor for carrying out data operation processing tasks, and the storage network is configured with a second processor for carrying out maintenance processing on the storage network;
when the second processor is idle, the data center server distributes part of the data operation processing tasks to the second processor for processing, and the second processor may directly feed the data requested by the user back to the electronic terminal device.
In the above method, the maintenance operation tasks carried out by the second processor include: wear leveling operations, module selection operations, error checking and correction operations, and/or data read/write operations.
In the above method, the data processing capability of the first processor is superior to the data processing capability of the second processor.
In the above method, the second processor is a microprocessor configured in a storage device in the storage network.
In the above method, the storage device in the storage network is one or more of a disk, a solid state drive, and a flash memory, and/or one or more of the NAS, DAS, and RAID in the data center server.
The present invention also provides a system for sharing and lightening the tasks of a data center server, the system comprising:
an electronic terminal device, a data center server, and a storage network, wherein a user sends a data request instruction to the data center server in the network through the electronic terminal device; after receiving the data request instruction, the data center server obtains the data information required by the user from the storage network and feeds the data information required by the user back to the electronic terminal device;
wherein the data center server is configured with a first processor for carrying out data operation processing tasks, and the storage network is configured with a second processor for carrying out maintenance processing on the storage network;
when the second processor is idle, the data center server distributes part of the data operation processing tasks to the second processor for processing, and the second processor may directly feed the data requested by the user back to the user terminal.
In the above system, the maintenance operation tasks carried out by the second processor include: wear leveling operations, module selection operations, error checking and correction operations, and/or data read/write operations.
In the above system, the data processing capability of the first processor is superior to the data processing capability of the second processor.
In the above system, the second processor is a microprocessor configured in a storage device in the storage network.
In the above system, the storage device in the storage network is one or more of a disk, a solid state drive, and a flash memory, and/or one or more of the NAS, DAS, and RAID in the data center server.
Since the present invention adopts the above technical solution, the microprocessors in the storage network are used to handle tasks that do not require heavy participation of the data center server's ALU, such as data transfer and data query, thereby lightening the load of the server and improving the performance of the system.
Brief description of the drawings
Upon reading the detailed description of the non-limiting embodiments with reference to the following drawings, the present invention and its features, aspects, and advantages will become more apparent. Identical reference numerals indicate identical parts throughout the drawings. The drawings are not deliberately drawn to scale; the emphasis is on illustrating the gist of the present invention.
Fig. 1 is a structural diagram of a conventional user searching for data information;
Fig. 2 is a schematic diagram of the method of the present invention for lightening the tasks of a data center server;
Fig. 3 is a schematic diagram of the server processor executing task A while task B occupies the memory;
Fig. 4 is a schematic diagram of a solution using in-memory computing (IMC) technology;
Fig. 5 is a schematic diagram of a solution using the method of the present invention for lightening server tasks;
Fig. 6 is a schematic diagram of the server processor executing task B2 while task B1 occupies the memory;
Fig. 7 is a schematic diagram of a solution using in-memory computing (IMC) technology;
Fig. 8 is a first schematic diagram of a solution using the method of the present invention for lightening server tasks;
Fig. 9 is a second schematic diagram of a solution using the method of the present invention for lightening server tasks;
Fig. 10 is a structural diagram of the disk controller in a hard disk;
Fig. 11 is schematic diagram A of a specific application of the present invention;
Fig. 12 is schematic diagram B of a specific application of the present invention;
Fig. 13 is schematic diagram C of a specific application of the present invention.
Detailed description of the embodiments
In the following description, a large number of specific details are given in order to provide a more thorough understanding of the present invention. However, it will be apparent to those skilled in the art that the present invention can be implemented without one or more of these details. In other examples, some technical features that are well known in the art are not described in order to avoid confusion with the present invention.
It should be appreciated that the present invention can be implemented in different forms and should not be construed as being limited to the embodiments set forth herein. On the contrary, these embodiments are provided so that the disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art. In the drawings, the sizes and relative sizes of layers and regions may be exaggerated for clarity. The same reference numerals denote the same elements throughout.
In order to thoroughly understand the present invention, detailed steps and detailed structures will be set forth in the following description so as to explain the technical solution of the present invention. Preferred embodiments of the present invention are described in detail as follows, but in addition to these detailed descriptions the present invention may also have other embodiments.
The present invention provides a method for sharing and lightening the tasks of a data center server, comprising the following steps:
Step S1: a user sends a data request instruction to a data center server in the network through an electronic terminal device. In an optional embodiment, the electronic terminal device may be a personal computer or a mobile phone.
Step S2: after receiving the data request instruction, the data center server obtains the data information required by the user from a storage network and feeds the data information required by the user back to the electronic terminal device.
In the present invention, the data center server is configured with a first processor for carrying out data operation processing tasks, and the storage network is configured with a second processor for carrying out maintenance processing on the storage network. When the second processor is idle, the data center server distributes part of the data operation processing tasks to the second processor for processing, and the second processor may directly feed the data requested by the user back to the electronic terminal device. The second processor thus relieves the pressure on the first processor and improves the processing speed of the whole data center.
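Purely as an illustration (and not part of the original disclosure), the dispatch decision described above can be sketched as follows. All names here — `Task`, `StorageMicroprocessor`, `DataCenterServer`, `dispatch` — are hypothetical; the sketch only assumes that simple, data-heavy tasks are routed to an idle storage-network microprocessor while ALU-heavy tasks stay on the server processor.

```python
# Illustrative sketch only; names and interfaces are assumptions, not the
# actual implementation disclosed in this patent.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    compute_heavy: bool   # True for ALU-heavy work (task A), False for simple work (task B)
    payload: bytes

class StorageMicroprocessor:
    """Second processor: a microprocessor inside a storage device (disk/SSD/NAS/RAID)."""
    def __init__(self):
        self.busy_with_maintenance = False  # wear leveling, ECC, read/write control, ...

    def is_idle(self) -> bool:
        return not self.busy_with_maintenance

    def process(self, task: Task) -> bytes:
        # Simple, data-heavy work (queries, transfers) handled close to the data.
        return task.payload

class DataCenterServer:
    """First processor: the high-performance server CPU."""
    def __init__(self, storage_cpu: StorageMicroprocessor):
        self.storage_cpu = storage_cpu

    def dispatch(self, task: Task) -> bytes:
        if not task.compute_heavy and self.storage_cpu.is_idle():
            # Offload task B to the second processor; the result can be returned
            # to the user terminal without occupying server memory.
            return self.storage_cpu.process(task)
        # Task A (or no idle storage processor): handle on the server CPU.
        return self.handle_locally(task)

    def handle_locally(self, task: Task) -> bytes:
        return task.payload

# Usage: a simple lookup (task B) is routed to the storage microprocessor.
server = DataCenterServer(StorageMicroprocessor())
result = server.dispatch(Task(name="lookup", compute_heavy=False, payload=b"record-42"))
```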
Preferably, the data processing capability of the first processor is superior to the data processing capability of the second processor. Further preferably, the second processor is a microprocessor configured in a storage device in the storage network, and the storage devices in the storage network include one or more of, for example, disks, solid state drives, and flash memory, as well as all storage devices such as the NAS, DAS, and RAID in some server storage networks.
Preferably, the maintenance operation tasks carried out by the second processor include: wear leveling operations, module selection operations, error checking and correction operations, and/or data read/write operations.
Meanwhile, the present invention also provides a system for sharing and lightening the tasks of a data center server, the system comprising: an electronic terminal device, a data center server, and a storage network. A user sends a data request instruction to the data center server in the network through the electronic terminal device; after receiving the data request instruction, the data center server obtains the data information required by the user from the storage network and feeds the data information required by the user back to the electronic terminal device. The data center server is configured with a first processor for carrying out data operation processing tasks, and the storage network is configured with a second processor for carrying out maintenance processing on the storage network. When the second processor is idle, the data center server distributes part of the data operation processing tasks to the second processor for processing, and the second processor may directly feed the data requested by the user back to the user terminal.
Preferably, the maintenance operation tasks carried out by the second processor include: wear leveling operations, module selection operations, error checking and correction operations, and/or data read/write operations.
Preferably, the data processing capability of the first processor is superior to the data processing capability of the second processor. Further preferably, the second processor is a microprocessor configured in a storage device in the storage network, and the storage devices in the storage network include, for example, disks, solid state drives, and flash memory, as well as all storage devices such as the NAS, DAS, and RAID in some server storage networks.
An embodiment of the present invention is further elaborated below. The present invention proposes a method for sharing the tasks of a data center server. In the conventional approach, the tasks originally handled by the server are divided into two parts: one part is task A, and the other part is task B. The tasks completed by the microprocessors in the storage network are referred to as task C. After applying the present method for sharing the tasks of a data center server, the processor in the server only completes task A, and the microprocessors in the storage network take over the server's task B, as shown in Fig. 2, thereby lightening the load of the server processor and improving system performance.
Here, task A consists of complex tasks involving scientific computation that require heavy participation of the ALU and do not need large bursts of data requests; task B consists of the tasks handled by the server processor other than task A, which are simple, repetitive tasks without scientific computation and generally require a large number of bursty data requests; task C consists of the routine maintenance tasks completed by the microprocessors in the storage network, such as wear leveling, module selection, error checking and correction, and data read/write.
The present invention allows the server processor to concentrate on task A, i.e., to process task A with its ALU of high data processing capability, while the microprocessors in the storage network process task B in parallel when they are not handling task C, that is, when they are idle. This reduces the read-speed bottleneck between the storage network and the server memory, reduces the power consumption of the large-capacity memory caused by task B and the I/O occupancy between the large-capacity memory and the server, and thereby improves the performance of the server system.
For example, in a certain server, the large amount of data required by task B occupies the memory space. If the processor now needs to process task A, the large amount of data required by task A has to be imported from the storage network into the server memory; since the memory space is occupied by the data of task B, the data of task B will be overwritten by the data of task A, as schematically shown in Fig. 3. If the data required by task B is frequently needed, it has to be repeatedly imported from the storage network, and the power consumption increases accordingly; in addition, reading data from the storage network limits the processing speed of the processor, so performance decreases as well. With in-memory computing (IMC) technology, extra servers have to be added to meet the larger memory requirement, as shown in Fig. 4. Obviously this approach can increase system performance, but the cost also increases greatly and so does the power consumption. With the present method for sharing and lightening the tasks of a data center server, there is no need to add expensive, power-hungry servers: the data required by task A is imported into the server memory and processed by the high-performance processor of the server, while task B is handled by the microprocessors in the storage network. The two can be executed in parallel, which is faster, more efficient, and high-performance, and the power consumed by importing the data of task B into the server memory is eliminated; the result is shown in Fig. 5.
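As a rough illustration only (the thread pool, function names, and toy workloads below are assumptions, not the patented implementation), the parallelism of Fig. 5 can be pictured as task A running on the server processor while task B runs at the same time next to the data, on a storage-network microprocessor:

```python
# Illustrative sketch only: two threads on one machine stand in for the two
# processors, so task B's data never has to be imported into server memory.

from concurrent.futures import ThreadPoolExecutor

def run_task_a_on_server(data_in_server_memory):
    # Complex, ALU-heavy processing (task A) on data already in server memory.
    return sum(x * x for x in data_in_server_memory)

def run_task_b_on_storage_cpu(records):
    # Simple, repetitive scan (task B) executed next to the data in the
    # storage device, e.g. by its controller microprocessor.
    return [r for r in records if r % 7 == 0]

with ThreadPoolExecutor(max_workers=2) as pool:
    fut_a = pool.submit(run_task_a_on_server, range(1_000))
    fut_b = pool.submit(run_task_b_on_storage_cpu, range(10_000))
    result_a, result_b = fut_a.result(), fut_b.result()
```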
As another example, in a certain server, the large amount of data required by task B1 occupies the memory space. If the processor now needs to process task B2, the large amount of data required by task B2 has to be imported from the storage network into the server memory; since the memory space is occupied by the data of task B1, the data of task B1 will be overwritten by the data of task B2, as schematically shown in Fig. 6. If the data required by task B1 is frequently needed, it has to be repeatedly imported from the storage network, and the power consumption increases accordingly; in addition, reading data from the storage network limits the processing speed of the processor, so performance decreases as well. With in-memory computing (IMC) technology, extra servers have to be added to meet the larger memory requirement, as shown in Fig. 7. Obviously this approach can increase system performance, but the cost also increases greatly and so does the power consumption. With the present method for sharing and lightening the tasks of a data center server, there is no need to add expensive, power-hungry servers: the processor of the server continues to process task B1, while the microprocessors in the storage network handle task B2; the result is shown in Fig. 8. To release the server processor even further, the large amounts of data of tasks B1 and B2 need not be imported into the server memory at all: multiple microprocessors in the storage network can execute task B1 and task B2 in parallel, and the processor of the server can go on to perform other tasks; the result is shown in Fig. 9. Obviously the present method for sharing and lightening server tasks has lower power consumption and lower cost, and its performance is not reduced, as shown in the comparison with in-memory computing technology in Table 1.
Table 1
A specific application is now described further.
Take a hard disk memory (Hard Disk) as an example. Most hard disks today contain a hard disk controller, the main structure of which is shown in Fig. 10. As can be seen from the figure, the controller has a processor and possesses its own code area and data area (buffer DRAM). As storage capacity grows and storage structures become more and more complex, the performance requirements on storage controllers also become higher; for example, the internal processors of storage controllers such as RAID and NAS controllers use Pentium-series processors or high-performance ARM cores and carry 2 GB or even larger memories. These processors can not only complete their traditional tasks, such as wear leveling, module selection, error checking and correction, and controlling data reads and writes, but are also fully capable of performing some more complex arithmetic operations such as data query and search, without any participation of the high-performance processor in the data center server. This not only saves the time of importing data from the storage network into the server memory, but also greatly reduces the power consumed by importing the data, releases the server processor, and improves performance.
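To make the idea concrete, here is a minimal sketch under stated assumptions: the controller interface (`controller_search`, `server_query`) and the sample records are invented for illustration and are not the firmware API of any real disk, RAID, or NAS controller. The query is executed by the storage controller's own processor, and only the matching records — rather than the whole data set — cross the link into the server.

```python
# Minimal sketch; the controller interface below is a hypothetical assumption.

def controller_search(records, predicate):
    """Runs on the storage controller's processor (with its own buffer DRAM):
    scans the data where it resides and returns only the matching records."""
    return [r for r in records if predicate(r)]

def server_query(stored_records, keyword):
    # The server pushes the query down instead of importing all records into
    # its own memory; only the small result set is transferred back.
    return controller_search(stored_records, lambda r: keyword in r)

matches = server_query(["alpha log", "beta log", "alpha error"], "alpha")
print(matches)  # ['alpha log', 'alpha error']
```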
Another embodiment is provided below for further elaboration. Suppose users A, B, C, and D are playing an online game — playing cards or Dou Dizhu ("fight the landlord"), that is, uncomplicated games. In the conventional case, the server necessarily participates in a series of processing operations: data is imported from the storage network into the server memory, processed, and then returned to user A, B, C, or D; the workflow is shown in Fig. 11. After applying the present method for sharing and lightening the tasks of a data center server, users A, B, C, and D send data requests, and the microprocessors in the storage network are fully capable of handling these tasks; the workflow is shown in Fig. 12. The microprocessors in the storage network process the users' data requests and return the processed data to the server, and the server transfers the processed data to the users. The difference from the process of Fig. 11 is that the server processor does not have to process the data requested by the users; the server only authorizes the users' data requests and returns the data, while the data processing operations for the users are completed by the microprocessors in the storage network. Furthermore, if users A, B, C, and D are authorized users, no participation of the server processor is needed at all: the microprocessors in the storage network and the users can exchange data directly through the south bridge, as shown in Fig. 13, which further avoids importing data from the storage network into the server memory and avoids occupying server processor resources. The server processor can then go on to handle other complex tasks, improving the performance of the system.
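The request routing in this card-game example could be sketched as follows; this is illustrative only, and the names (`AUTHORIZED_USERS`, `storage_cpu_handle`, `handle_request`) are hypothetical. The server processor only checks authorization, while the data work is done by the storage-network microprocessor, mirroring Figs. 12 and 13.

```python
# Illustrative sketch only; names and data are invented for the example.

AUTHORIZED_USERS = {"A", "B", "C", "D"}

def storage_cpu_handle(request):
    # Game-state lookup/update performed by the storage-network microprocessor.
    return f"game-state for {request['user']}"

def handle_request(request):
    if request["user"] not in AUTHORIZED_USERS:
        return "denied"                      # only this check needs the server processor
    return storage_cpu_handle(request)       # data work offloaded to the storage CPU

print(handle_request({"user": "A", "action": "deal-cards"}))
```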
In summary, the present invention proposes a method for sharing and lightening the tasks of a data center server, in which the processors in the storage network take over some of the simpler tasks of the processor inside the server, such as data transfer and data query, thereby lightening the load of the server; the server processor can then go on to perform other complex tasks, which substantially reduces power consumption, saves cost, and improves system performance.
Preferred embodiments of the present invention have been described above. It should be understood that the present invention is not limited to the above specific embodiments, and the devices and structures not described in detail therein should be understood as being implemented in the ordinary manner of the art. Any person skilled in the art may, without departing from the scope of the technical solution of the present invention, use the methods and technical content disclosed above to make many possible changes and modifications to the technical solution of the present invention, or modify it into equivalent embodiments with equivalent variations; this does not affect the substantive content of the present invention. Therefore, any simple modification, equivalent change, or modification made to the above embodiments according to the technical spirit of the present invention, without departing from the content of the technical solution of the present invention, still falls within the protection scope of the technical solution of the present invention.

Claims (2)

1. A method for sharing and lightening the tasks of a data center server, characterized by comprising the following steps:
Step S1: a user sends a data request instruction to a data center server in the network through an electronic terminal device;
Step S2: after receiving the data request instruction, the data center server obtains the data information required by the user from a storage network and feeds the data information required by the user back to the electronic terminal device;
wherein the data center server is configured with a first processor for carrying out data operation processing tasks; the storage network is configured with a second processor for carrying out maintenance processing on the storage network;
when the second processor is idle, the data center server distributes part of the data operation processing tasks to the second processor for processing, and the second processor directly feeds the data requested by the user back to the electronic terminal device;
the maintenance operation tasks carried out by the second processor include: wear leveling operations, module selection operations, error checking and correction operations, and/or data read/write operations;
the data processing capability of the first processor is superior to the data processing capability of the second processor;
the second processor is a microprocessor configured in a storage device in the storage network;
the storage device in the storage network is one or more of a disk, a solid state drive, and a flash memory, and/or one or more of the NAS, DAS, and RAID in the data center server.
2. A system for sharing and lightening the tasks of a data center server, characterized in that the system comprises:
an electronic terminal device, a data center server, and a storage network, wherein a user sends a data request instruction to the data center server in the network through the electronic terminal device; after receiving the data request instruction, the data center server obtains the data information required by the user from the storage network and feeds the data information required by the user back to the electronic terminal device;
wherein the data center server is configured with a first processor for carrying out data operation processing tasks; the storage network is configured with a second processor for carrying out maintenance processing on the storage network;
when the second processor is idle, the data center server distributes part of the data operation processing tasks to the second processor for processing, and the second processor directly feeds the data requested by the user back to the user terminal;
the maintenance operation tasks carried out by the second processor include: wear leveling operations, module selection operations, error checking and correction operations, and/or data read/write operations;
the data processing capability of the first processor is superior to the data processing capability of the second processor;
the second processor is a microprocessor configured in a storage device in the storage network;
the storage device in the storage network is one or more of a disk, a solid state drive, and a flash memory, and/or one or more of the NAS, DAS, and RAID in the data center server.
CN201410394960.0A 2014-08-12 2014-08-12 Method and system for sharing and lightening data center server tasks Active CN104158875B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410394960.0A CN104158875B (en) 2014-08-12 2014-08-12 Method and system for sharing and lightening data center server tasks

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410394960.0A CN104158875B (en) 2014-08-12 2014-08-12 Method and system for sharing and lightening data center server tasks

Publications (2)

Publication Number Publication Date
CN104158875A CN104158875A (en) 2014-11-19
CN104158875B true CN104158875B (en) 2018-04-20

Family

ID=51884280

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410394960.0A Active CN104158875B (en) 2014-08-12 2014-08-12 Method and system for sharing and lightening data center server tasks

Country Status (1)

Country Link
CN (1) CN104158875B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104571946B (en) * 2014-11-28 2017-06-27 中国科学院上海微系统与信息技术研究所 A kind of storage arrangement and its access method for supporting logic circuit quick search
CN104991630B (en) * 2015-06-15 2018-02-27 上海新储集成电路有限公司 The method and server architecture of processor load in a kind of reduction server
CN105022698B (en) * 2015-06-26 2020-06-19 上海新储集成电路有限公司 Method for storing special function data by using last-level mixed cache
CN105740134B (en) * 2016-01-29 2018-03-20 浪潮(北京)电子信息产业有限公司 A kind of method of testing and device based on file
CN107276912B (en) * 2016-04-07 2021-08-27 华为技术有限公司 Memory, message processing method and distributed storage system
CN110083480B (en) * 2019-04-16 2023-08-18 上海新储集成电路有限公司 Configurable multifunctional data processing unit

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1619478A (en) * 2003-11-21 2005-05-25 Hitachi, Ltd. Cluster-type storage system and management method thereof
CN1673975A (en) * 2004-03-25 2005-09-28 Hitachi, Ltd. Storage system
CN101038590A (en) * 2007-04-13 2007-09-19 Wuhan University Spatial data clustered storage system and data searching method
CN101105711A (en) * 2006-07-13 2008-01-16 International Business Machines Corporation System and method for distributing processing function between main processor and assistant processor
CN201011563Y (en) * 2006-12-01 2008-01-23 Huazhong University of Science and Technology Disk array auxiliary processing control card
CN102760045A (en) * 2011-04-29 2012-10-31 Wuxi Jiangnan Institute of Computing Technology Intelligent storage device and data processing method thereof


Also Published As

Publication number Publication date
CN104158875A (en) 2014-11-19

Similar Documents

Publication Publication Date Title
CN104158875B (en) Method and system for sharing and lightening data center server tasks
US11030131B2 (en) Data processing performance enhancement for neural networks using a virtualized data iterator
US8590050B2 (en) Security compliant data storage management
Wang et al. A parallel file system with application-aware data layout policies for massive remote sensing image processing in digital earth
US10318346B1 (en) Prioritized scheduling of data store access requests
CN105843841A (en) Small file storing method and system
Zhang et al. Oceanrt: Real-time analytics over large temporal data
Wang et al. Bio-inspired cost-effective access to big data
US11494237B2 (en) Managing workloads of a deep neural network processor
CN105426119A (en) Storage apparatus and data processing method
CN107895044A (en) A kind of database data processing method, device and system
Marquez et al. An Intelligent Approach to Resource Allocation on Heterogeneous Cloud Infrastructures
Cheng et al. Accelerating scientific workflows with tiered data management system
Imran et al. Big data analytics tools and platform in big data landscape
Martin et al. Multi-temperate logical data warehouse design for large-scale healthcare data
CN104636209B (en) The resource scheduling system and method optimized based on big data and cloud storage system directional properties
Jiyani et al. NAM: a nearest acquaintance modeling approach for VM allocation using R-Tree
Jin et al. Optimization of task assignment strategy for map-reduce
CN105224596A (en) A kind of method of visit data and device
Di Modica et al. A hierarchical hadoop framework to process geo-distributed big data
CN106033434A (en) Virtual asset data replica processing method based on data size and popularity
Lu et al. Collective input/output under memory constraints
US10996945B1 (en) Splitting programs into distributed parts
SURESH et al. A New Exploiting Dynamic Resource Allocation for Efficient Parallel Data Processing in the Cloud
WO2020147807A1 (en) Bucketizing data into buckets for processing by code modules

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20221101

Address after: 4 / F, building 2, Hunan scientific research achievements transformation center workshop, Longping high tech park, Furong district, Changsha City, Hunan Province 410000

Patentee after: HUNAN QINHAI DIGITAL Co.,Ltd.

Address before: 201500 No. 8, building 2, No. 6505, Tingwei Road, Jinshan District, Shanghai

Patentee before: SHANGHAI XINCHU INTEGRATED CIRCUIT Co.,Ltd.

CP03 Change of name, title or address

Address after: No. 338, Zhanggongling Road, Longping High tech Park, Furong District, Changsha, Hunan 410000

Patentee after: Hunan Qinhai Digital Co.,Ltd.

Address before: 4 / F, building 2, Hunan scientific research achievements transformation center workshop, Longping high tech park, Furong district, Changsha City, Hunan Province 410000

Patentee before: HUNAN QINHAI DIGITAL Co.,Ltd.